After setting up the Spark environment, running one of the Spark examples for verification produced the following error:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[hadoop@gpmaster bin]$ ./run-example org.apache.spark.examples.SparkPi
15/10/01 08:59:33 INFO spark.SparkContext: Running Spark version 1.5.0
.......................
15/10/01 08:59:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.128:17514]
15/10/01 08:59:35 INFO util.Utils: Successfully started service 'sparkDriver' on port 17514.
.......................
15/10/01 08:59:36 INFO ui.SparkUI: Started SparkUI at http://192.168.1.128:4040
15/10/01 08:59:37 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar at http://192.168.1.128:36471/jars/spark-examples-1.5.0-hadoop2.6.0.jar with timestamp 1443661177865
15/10/01 08:59:37 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/01 08:59:38 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.1.128:7077...
15/10/01 08:59:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151001085938-0000
.................................
15/10/01 08:59:40 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/01 08:59:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:25 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:40 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:01:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:01:25 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:01:40 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:01:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:02:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:02:25 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
From the warning we can roughly tell that the job received no resources during initialization, and Spark suggests checking the cluster UI to make sure the workers are registered and have sufficient resources.
There are a few possible causes, which can be checked one by one:
1. Hostname and IP configuration
First check that the /etc/hosts file is correct.
You can also use spark-shell to inspect the configuration the SparkContext actually picked up (a programmatic sketch of the same check follows the output below):
[hadoop@gpmaster bin]$ ./spark-shell
........
scala> sc.getConf.getAll.foreach(println)
(spark.fileserver.uri,http://192.168.1.128:34634)
(spark.app.name,Spark shell)
(spark.driver.port,25392)
(spark.app.id,app-20151001090322-0001)
(spark.repl.class.uri,http://192.168.1.128:24988)
(spark.externalBlockStore.folderName,spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc)
(spark.jars,)
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.host,192.168.1.128)
(spark.master,spark://192.168.1.128:7077)
scala> sc.getConf.toDebugString
res8: String =
spark.app.id=app-20151001090322-0001
spark.app.name=Spark shell
spark.driver.host=192.168.1.128
spark.driver.port=25392
spark.executor.id=driver
spark.externalBlockStore.folderName=spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc
spark.fileserver.uri=http://192.168.1.128:34634
spark.jars=
spark.master=spark://192.168.1.128:7077
spark.repl.class.uri=http://192.168.1.128:24988
spark.submit.deployMode=client
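If /etc/hosts turns out to be wrong, the master URL and the driver host can also be pinned explicitly in the application's SparkConf instead of relying on hostname resolution. The snippet below is only a minimal sketch against the Spark 1.5-era API; the application name is made up, and the 192.168.1.128 address is simply the one shown in the logs above.

import org.apache.spark.{SparkConf, SparkContext}

// Pin the standalone master URL and the driver host explicitly so that a bad
// /etc/hosts entry cannot make the driver advertise an address the workers
// cannot reach back to.
val conf = new SparkConf()
  .setAppName("HostCheck")                         // hypothetical app name
  .setMaster("spark://192.168.1.128:7077")
  .set("spark.driver.host", "192.168.1.128")

val sc = new SparkContext(conf)
// Same check as in spark-shell above: print what the SparkContext picked up.
sc.getConf.getAll.foreach(println)
sc.stop()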
2. Insufficient memory
In my environment this turned out to be the actual cause.
In my cluster, the spark-env.sh file was configured as follows:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.128
export SPARK_WORKER_MEMORY=100m
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export MASTER=spark://192.168.1.128:7077
Each node in my cluster only had about 500 MB of memory left, and because I had not set SPARK_EXECUTOR_MEMORY, every executor requested the default of 1 GB. The workers could not satisfy that request, so the job never received any resources and the warning above kept repeating. Note that the standalone master only launches an executor on a worker whose SPARK_WORKER_MEMORY is at least as large as the requested executor memory, so the 100m value above would also need to be raised accordingly.
The fix is to add the following setting to spark-env.sh (the cluster needs to be restarted for it to take effect):
export SPARK_EXECUTOR_MEMORY=512m
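As an alternative (this is only a sketch, not the original post's method), the same limit can be requested per application through the spark.executor.memory property, either in the SparkConf or via spark-submit's --executor-memory option. The value 256m and the application name below are made-up examples; whatever value is used must still fit within each worker's SPARK_WORKER_MEMORY.

import org.apache.spark.{SparkConf, SparkContext}

// Ask for small executors for this one application instead of changing the
// cluster-wide default in spark-env.sh.
val conf = new SparkConf()
  .setAppName("SmallExecutorPi")                   // hypothetical app name
  .setMaster("spark://192.168.1.128:7077")
  .set("spark.executor.memory", "256m")            // example value only

val sc = new SparkContext(conf)
// A trivial job: if executors launch now, the warning should no longer appear.
val evens = sc.parallelize(1 to 10000).filter(_ % 2 == 0).count()
println(s"even numbers: $evens")
sc.stop()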
3. A port is already in use because a previously launched application is still running. Stop the earlier application (it may also still be holding the cluster's cores and memory) before resubmitting.
Original post: http://www.cnblogs.com/snowbook/p/5831473.html