hadoop(1)Quick Guide to Hadoop on Ubuntu
The Apache Hadoop software library is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Subprojects:
Hadoop Common
Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
Hadoop MapReduce: A software framework for distributed processing of large data sets on compute clusters.
Other Hadoop-related projects:
Avro: A data serialization system
Cassandra: A scalable multi-master database with no single points of failure.
Chukwa: A data collection system for managing large distributed systems.
HBase: A scalable, distributed database that supports structured data storage for large tables.
Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
Mahout: A scalable machine learning and data mining library.
Pig: A high-level data-flow language and execution framework for parallel computation.
ZooKeeper: A high-performance coordination service for distributed applications.
1. Single Node Setup
Quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).
On Windows 7 we would first need to install Cygwin (http://www.cygwin.com/), but Windows is only for development; I use my Ubuntu virtual machine instead.
Download the Hadoop release from http://mirror.cc.columbia.edu/pub/software/apache//hadoop/common/stable/. The file names are
hadoop-0.23.0-src.tar.gz and hadoop-0.23.0.tar.gz.
I decided to build it from source.
Install ProtocolBuffer on Ubuntu.
Download the file from this URL: http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
>wget http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
>tar zxvf protobuf-2.4.1.tar.gz
>cd protobuf-2.4.1
>sudo ./configure --prefix=/usr/local
>sudo make
>sudo make install
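To quickly confirm the install worked (assuming protoc landed on the default PATH from the --prefix above), check the version; it should report 2.4.1:
>protoc --version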
Install Hadoop Common
>svn checkout http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.23.0/
>cd release-0.23.0
>mvn package -Pdist,native,docs,src -DskipTests -Dtar
error message:
org.apache.maven.reactor.MavenExecutionException: Failed to validate POM for project org.apache.hadoop:hadoop-project at /home/carl/download/release-0.23.0/hadoop-project/pom.xml
at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
Solution:
Install Maven 3 instead:
>sudo apt-get remove maven2
>sudo apt-get autoremove maven2
>sudo apt-get install python-software-properties
>sudo add-apt-repository "deb http://build.discursive.com/apt/ lucid main"
>sudo apt-get update
>sudo apt-get install maven
Add this directory to the PATH entry in /etc/environment:
/usr/local/apache-maven-3.0.3/bin
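As a rough sketch, the PATH line in /etc/environment might end up looking like this after the Maven bin directory is appended (the existing entries on your system will differ):
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/apache-maven-3.0.3/bin"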
>. /etc/environment
It works now.
>mvn package -Pdist,native,docs,src -DskipTests -Dtar
It fails again, so I check the BUILDING.txt file, which lists these requirements:
* Unix System
* JDK 1.6
* Maven 3.0
* Forrest 0.8 (if generating docs)
* Findbugs 1.3.9 (if running findbugs)
* ProtocolBuffer 2.4.1+ (for MapReduce)
* Autotools (if compiling native code)
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
Install Forrest on Ubuntu
http://forrest.apache.org/
>wget http://mirrors.200p-sf.sonic.net/apache//forrest/apache-forrest-0.9.tar.gz
>tar zxvf apache-forrest-0.9.tar.gz
>sudo mv apache-forrest-0.9 /usr/local/
>sudo vi /etc/environment
Add /usr/local/apache-forrest-0.9/bin to the PATH.
>. /etc/environment
Install Autotools in Ubuntu
>sudo apt-get install build-essential g++ automake autoconf gnu-standards autoconf-doc libtool gettext autoconf-archive
Build Hadoop again:
>mvn package -Pdist -DskipTests=true -Dtar
The build succeeds, and I get the file /home/carl/download/release-0.23.0/hadoop-dist/target/hadoop-0.23.0-SNAPSHOT.tar.gz.
Make sure ssh and rsync are on my system.
>sudo apt-get install ssh
>sudo apt-get install rsync
Unpack the hadoop distribution.
>tar zxvf hadoop-0.23.0-SNAPSHOT.tar.gz
>sudo mv hadoop-0.23.0-SNAPSHOT /usr/local/
>cd /usr/local/
>sudo mv hadoop-0.23.0-SNAPSHOT hadoop-0.23.0
>cd hadoop-0.23.0/conf/
>vi hadoop-env.sh
Modify the JAVA_HOME line to the following:
JAVA_HOME=/usr/lib/jvm/java-6-sun
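In hadoop-env.sh this is normally written as an export (the path shown is the Sun JDK 6 package location on Ubuntu; adjust it if your JDK lives elsewhere):
export JAVA_HOME=/usr/lib/jvm/java-6-sun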
Check the hadoop command:
>bin/hadoop version
Hadoop 0.23.0-SNAPSHOT
Subversion http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.23.0/hadoop-common-project/hadoop-common -r 1196973
Compiled by carl on Wed Nov 30 02:32:31 EST 2011
From source with checksum 4e42b2d96c899a98a8ab8c7cc23f27ae
There are 3 modes:
Local (Standalone) Mode
Pseudo-Distributed Mode
Fully-Distributed Mode
Standalone Operation
>mkdir input
>cp conf/*.xml input
>vi input/1.xml
YARNtestforfun
>bin/hadoop jar hadoop-mapreduce-examples-0.23.0.jar grep input output 'YARN[a-zA-Z.]+'
>cat output/*
1 YARNtestforfun
Pseudo-Distributed Operation
Hadoop can also be run on a single node in pseudo-distributed mode, where each Hadoop daemon runs in a separate Java process.
Configuration
conf/core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
conf/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
conf/mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Set up passphraseless ssh
>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
>cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
>ssh localhost
Now I can ssh to localhost without a password.
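If ssh still prompts for a password, a common cause (not covered in the original steps) is overly loose permissions on the key files; tightening them usually fixes it:
>chmod 700 ~/.ssh
>chmod 600 ~/.ssh/authorized_keys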
Execution
Format a new distributed filesystem:
>bin/hadoop namenode -format
Start the hadoop daemons:
>bin/start-all.sh
The logs go to ${HADOOP_HOME}/logs, e.g. /usr/local/hadoop-0.23.0/logs/yarn-carl-nodemanager-ubuntus.out. The error messages are as follows:
No HADOOP_CONF_DIR set.
Please specify it either in yarn-env.sh or in the environment.
Solution:
>sudo vi yarn-env.sh
>sudo vi /etc/environment
>sudo vi hadoop-env.sh
Add these lines:
HADOOP_CONF_DIR=/usr/local/hadoop-0.23.0/conf
HADOOP_COMMON_HOME=/usr/local/hadoop-0.23.0/share/hadoop/common
HADOOP_HDFS_HOME=/usr/local/hadoop-0.23.0/share/hadoop/hdfs
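As a sketch of where these lines go: in hadoop-env.sh and yarn-env.sh they are written as exports, while /etc/environment takes plain KEY=value entries. The paths assume the /usr/local/hadoop-0.23.0 layout used above:
export HADOOP_CONF_DIR=/usr/local/hadoop-0.23.0/conf
export HADOOP_COMMON_HOME=/usr/local/hadoop-0.23.0/share/hadoop/common
export HADOOP_HDFS_HOME=/usr/local/hadoop-0.23.0/share/hadoop/hdfs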
>bin/start-all.sh
http://192.168.56.101:9999/node
http://192.168.56.101:8088/cluster
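Before opening the web UIs it is worth confirming the daemons actually started; jps from the JDK lists the running Java processes (the exact daemon names depend on the 0.23/YARN layout):
>jps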
Change the configuration files; comment out all the other XML files in the conf directory.
>vi conf/yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Things are different from the older guides in the latest version 0.23.0, so I need to make some changes according to another guide.
References:
http://guoyunsky.iteye.com/category/186934
http://hadoop.apache.org/common/docs/r0.19.2/cn/quickstart.html
http://hadoop.apache.org/
http://guoyunsky.iteye.com/blog/1233707
http://hadoop.apache.org/common/
http://hadoop.apache.org/common/docs/stable/single_node_setup.html
http://www.blogjava.net/shenh062326/archive/2011/11/10/yuling_hadoop_0-23_compile.html
http://sillycat.iteye.com/blog/965534
http://www.cloudera.com/blog/2009/08/hadoop-default-ports-quick-reference/
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/SingleCluster.html
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/ClusterSetup.html