Base environment:
namenode 192.168.1.187 kafka3
datanode 192.168.1.188 kafka4
datanode 192.168.1.189 kafka1
This cluster was installed service by service from a downloaded hadoop-*.tar.gz, so every configuration file has to be edited by hand -- somewhat more involved than a Cloudera Manager install.
hadoop 2.6.2
hive 2.0.1 -- installed only on 187
1. Start Hadoop
./start-all.sh
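If the cluster came up cleanly, running jps on each node should list NameNode/ResourceManager on 187 and DataNode/NodeManager on 188 and 189.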
2. Configure Hive
[root@kafka3 conf]# cat hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/hadoop/hive/log</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>mapred.job.tracker</name>
<value>http://192.168.1.187:9001</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>192.168.1.187</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.1.189:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>kafka1,kafka4,kafka3</value>
</property>
</configuration>
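Note: because datanucleus.autoCreateSchema is false and datanucleus.fixedDatastore is true, Hive will not create the metastore tables in MySQL by itself; on Hive 2.x the schema is normally initialized once beforehand with $HIVE_HOME/bin/schematool -dbType mysql -initSchema.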
3. Start the HiveServer2 service
[root@kafka3 bin]# ./hiveserver2
Command-line mode:
hive --service hiveserver2
Service mode:
./hiveserver2
4. Test the connection:
There is no need to write a JDBC program yet; just run bin/beeline:
[root@kafka3 bin]# ./beeline
ls: cannot access /opt/apache-hive-2.0.1-bin//lib/hive-jdbc-*-standalone.jar: No such file or directory
Beeline version 2.0.1 by Apache Hive
beeline> !connect jdbc:hive://192.168.1.187:10000 root root
scan complete in 1ms
scan complete in 7577ms
No known driver to handle "jdbc:hive://192.168.1.187:10000" -- the hive: scheme is HiveServer1's; use hive2: instead
beeline>
The missing jar turned out to ship under Hive's jdbc/ directory, so copy it into lib/.
(Note: when the Eclipse project is built later, all the jars under Hive's lib directory must be added to it.)
[root@kafka3 bin]# cp /opt/apache-hive-2.0.1-bin/jdbc/hive-jdbc-2.0.1-standalone.jar /opt/apache-hive-2.0.1-bin/lib/
beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000: root
beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:
Error: Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
User: root is not allowed to impersonate root (state=,code=0)
Restarting Hadoop did not fix it either, though the error message changed.
Add the following to Hadoop's core-site.xml:
<property>
<name>hadoop.proxyuser.hadoop.hosts</name> -- this was the initial mistake, unnoticed for a long time: I connect as root, not as a hadoop user, so hadoop is the wrong user name here
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>root</value>
</property>
The correct version is to add this to Hadoop's core-site.xml (the segment after hadoop.proxyuser. must name the user that actually connects, root here):
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>root</value>
</property>
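Proxy-user settings are read by the NameNode and ResourceManager at startup; after editing core-site.xml, either restart HDFS/YARN or refresh them in place with hdfs dfsadmin -refreshSuperUserGroupsConfiguration and yarn rmadmin -refreshSuperUserGroupsConfiguration.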
beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:
16/06/02 11:22:00 [main]: INFO jdbc.HiveConnection: Transport Used for JDBC connection: null
Error: Could not open client transport with JDBC Uri: jdbc:hive2://192.168.1.187:10000: java.net.ConnectException: Connection refused (state=08S01,code=0)
Copy over all the jars under $HIVE_HOME/lib whose names start with hive-:
[root@kafka3 bin]# ./beeline
Beeline version 2.0.1 by Apache Hive -- the earlier error is gone
beeline>
Enable Hive logging:
cd /opt/apache-hive-2.0.1-bin/conf
cp hive-log4j2.properties.template hive-log4j2.properties
vi hive-log4j2.properties
property.hive.log.dir = /hadoop/hive/log
property.hive.log.file = hive.log
[root@kafka3 log]# more hive.log
2016-06-03T10:20:16,883 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:OperationManager is inited.
2016-06-03T10:20:16,884 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:SessionManager is inited.
2016-06-03T10:20:16,884 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:CLIService is inited.
2016-06-03T10:20:16,884 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:ThriftBinaryCLIService is inited.
2016-06-03T10:20:16,884 INFO [main]: service.AbstractService (AbstractService.java:init(89)) - Service:HiveServer2 is inited.
2016-06-03T10:20:17,022 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:OperationManager is started.
2016-06-03T10:20:17,022 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:SessionManager is started.
2016-06-03T10:20:17,023 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:CLIService is started.
2016-06-03T10:20:17,023 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:ThriftBinaryCLIService is started.
2016-06-03T10:20:17,023 INFO [main]: service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2016-06-03T10:20:17,038 INFO [main]: server.Server (Server.java:doStart(252)) - jetty-7.6.0.v20120127
2016-06-03T10:20:17,064 INFO [main]: webapp.WebInfConfiguration (WebInfConfiguration.java:unpack(455)) - Extract jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/hiveserver2/ to /tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp
2016-06-03T10:20:17,582 INFO [Thread-10]: thrift.ThriftCLIService (ThriftBinaryCLIService.java:run(100)) - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2016-06-03T10:20:17,781 INFO [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp/},jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/hiveserver2
2016-06-03T10:20:17,827 INFO [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.s.ServletContextHandler{/static,jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/static}
2016-06-03T10:20:17,827 INFO [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.s.ServletContextHandler{/logs,file:/hadoop/hive/log/}
2016-06-03T10:20:17,841 INFO [main]: server.AbstractConnector (AbstractConnector.java:doStart(333)) - Started SelectChannelConnector@0.0.0.0:10002
2016-06-03T10:20:17,841 INFO [main]: server.HiveServer2 (HiveServer2.java:start(438)) - Web UI has started on port 10002
The web UI opens and shows HiveServer2:
http://192.168.1.187:10002/hiveserver2.jsp
1. So the log shows HiveServer2 starting normally, yet connections still fail with: User: root is not allowed to impersonate root
Check the full core-site.xml that Hadoop is using:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.187:9000</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/hadoop/name</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>192.168.1.187</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>root</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>3600</value>
<description>The number of seconds between two periodic checkpoints.</description>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/hadoop/namesecondary</value>
</property>
</configuration>
It took a long while to find the real cause: the core-site.xml edits made on 187 had never been copied to the other two nodes.
2. Impersonation: with hive.server2.enable.doAs set to true, HiveServer2 executes statements as the submitting user; if false, it executes them as the admin user that started the HiveServer2 daemon.
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
3. The JDBC side
HiveServer1's driver class name is org.apache.hadoop.hive.jdbc.HiveDriver, while HiveServer2's is org.apache.hive.jdbc.HiveDriver; the two are easy to mix up.
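A minimal sketch of the HiveServer2 variant (the class name DriverCheck is made up for illustration; the URL and credentials are the ones used throughout this post):
package hivecon;

import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: register the HiveServer2 driver and open a connection.
// HiveServer1 would instead use org.apache.hadoop.hive.jdbc.HiveDriver
// with a jdbc:hive:// URL.
public class DriverCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");     // HiveServer2 driver
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://192.168.1.187:10000", "root", "");  // note the hive2 scheme
        System.out.println("Connected: " + conn);
        conn.close();
    }
}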
[root@kafka3 bin]# hiveserver2 -- success at last!
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
OK
[root@kafka3 hadoop]# cd /opt/apache-hive-2.0.1-bin/bin
[root@kafka3 bin]# ./beeline
Beeline version 2.0.1 by Apache Hive
beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connected to: Apache Hive (version 2.0.1)
Driver: Hive JDBC (version 2.0.1)
16/06/03 15:44:19 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.1.187:10000>
0: jdbc:hive2://192.168.1.187:10000> show tables;
INFO : Compiling command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007): show tables
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007); Time taken: 0.291 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007): show tables
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007); Time taken: 0.199 seconds
INFO : OK
+---------------------------+--+
| tab_name |
+---------------------------+--+
| c2 |
| hbase_runningrecord_temp |
| rc_file |
| rc_file1 |
| runningrecord_old |
| sequence_file |
| studentinfo |
| t2 |
| test_table |
| test_table1 |
| tina |
+---------------------------+--+
11 rows selected (1.194 seconds)
0: jdbc:hive2://192.168.1.187:10000>
Create an Eclipse project hivecon, a package hivecon, and a class testhive:
package hivecon;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class testhive {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 driver and connect as root
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.1.187:10000", "root", "");
        System.out.println("Connection: " + conn);
        Statement stmt = conn.createStatement();
        String query_sql = "select systemno from runningrecord_old limit 1";
        ResultSet rs = stmt.executeQuery(query_sql);
        System.out.println("Has data: " + rs.next());
    }
}
Run it directly; the output:
ERROR StatusLogger Unrecognized format specifier [msg]
ERROR StatusLogger Unrecognized conversion specifier [msg] starting at position 54 in conversion pattern.
ERROR StatusLogger Unrecognized format specifier [n]
ERROR StatusLogger Unrecognized conversion specifier [n] starting at position 56 in conversion pattern. -- log4j2 pattern noise; ignore it for now
Connection: org.apache.hive.jdbc.HiveConnection@64485a47
Has data: false
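The testhive class above never closes its Connection or Statement. A sketch of the same check with try-with-resources (same driver, URL, and table; the class name testhive2 is hypothetical), so the resources are released even if the query throws:
package hivecon;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class testhive2 {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://192.168.1.187:10000", "root", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select systemno from runningrecord_old limit 1")) {
            System.out.println("Has data: " + rs.next());
        }  // conn, stmt, and rs are all closed here, even on error
    }
}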
--- Now add some more operations:
package hivecon;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class testhive {
    private static String sql = "";
    private static ResultSet res;

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.1.187:10000", "root", "");
        System.out.println("Connection: " + conn);
        Statement stmt = conn.createStatement();
        String query_sql = "select systemno from runningrecord_old limit 1";
        ResultSet rs = stmt.executeQuery(query_sql);
        System.out.println("Has data: " + rs.next());
        // Name of the table to create
        String tableName = "tinatest";
        /** Step 1: drop the table if it already exists **/
        sql = "drop table if exists " + tableName;
        stmt.execute(sql);
        /** Step 2: create it **/
        sql = "create table " + tableName + " (key int, value string) row format delimited fields terminated by ','";
        stmt.execute(sql);
        // Run "show tables"
        sql = "show tables '" + tableName + "'";
        System.out.println("Running:" + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"show tables\":");
        if (res.next()) {
            System.out.println(res.getString(1));
        }
        // Run "describe table"
        sql = "describe " + tableName;
        System.out.println("Running:" + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"describe table\":");
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }
        // Run "load data into table"
        String filepath = "/tmp/test2.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running:" + sql);
        stmt.executeUpdate(sql);
        // Run "select * query"
        sql = "select * from " + tableName;
        System.out.println("Running:" + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"select * query\":");
        while (res.next()) {
            System.out.println(res.getInt(1) + "\t" + res.getString(2));
        }
        conn.close();
        conn = null;
    }
}
-- Execution output:
Connection: org.apache.hive.jdbc.HiveConnection@64485a47
Has data: true
Running:show tables 'tinatest'
Result of "show tables":
tinatest
Running:describe tinatest
Result of "describe table":
key int
value string
Running:load data local inpath '/tmp/test2.txt' into table tinatest
Running:select * from tinatest
Result of "select * query":
1 a
2 b
3 tina
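Note that when the statement is submitted through HiveServer2 (beeline or JDBC), load data local inpath reads the file from the machine where HiveServer2 runs (187 here), not from the client, so /tmp/test2.txt must exist on the server.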
Verify in the Hive CLI:
hive> show tables;
OK
c2
hbase_runningrecord_temp
rc_file
rc_file1
runningrecord_old
sequence_file
studentinfo
t2
test_table
test_table1
tina
tinatest
Time taken: 0.065 seconds, Fetched: 12 row(s)
hive> select * from tinatest;
OK
1 a
2 b
3 tina
Time taken: 3.065 seconds, Fetched: 3 row(s)
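As a last sketch, the positional res.getInt(1)/res.getString(2) calls above can be generalized with ResultSetMetaData, so the same loop prints any result set (the class name PrintQuery is made up; connection settings as before):
package hivecon;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PrintQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://192.168.1.187:10000", "root", "");
             Statement stmt = conn.createStatement();
             ResultSet res = stmt.executeQuery("select * from tinatest")) {
            ResultSetMetaData md = res.getMetaData();
            // Print every row, column by column, without hard-coding column count or types
            while (res.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) row.append('\t');
                    row.append(res.getString(i));  // getString also renders int columns
                }
                System.out.println(row);
            }
        }
    }
}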