DevOps(5)Spark Deployment on VM
1. Old Environment
1.1 Jdk
java version "1.6.0_45"
Switch the Java version on the Ubuntu system:
>sudo update-alternatives --config java
 
Set up JAVA_HOME on Ubuntu:
>vi ~/.profile
export JAVA_HOME="/usr/lib/jvm/java-6-oracle"
 
Java compile version problem: the error below means a class compiled with JDK 7 is being run on JDK 6, so switch both java and javac to matching versions.
[warn] Error reading API from class file : java.lang.UnsupportedClassVersionError: com/digby/localpoint/auth/util/Base64$OutputStream : Unsupported major.minor version 51.0
 
>sudo update-alternatives --config java
>sudo update-alternatives --config javac
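 
After switching, verify that both tools report the expected version:
>java -version
>javac -version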
 
1.2 Cassandra
Cassandra version 1.2.13.
 
> sudo mkdir -p /var/log/cassandra
> sudo chown -R carl /var/log/cassandra
(carl is my username)
> sudo mkdir -p /var/lib/cassandra
> sudo chown -R carl /var/lib/cassandra
 
Change the config if needed, then start Cassandra in single-node mode:
> cassandra -f conf/cassandra.yaml 
 
Test it from the client:
> cassandra-cli -host ubuntu-dev1 -port 9160
 
To set up multiple nodes, change these settings in conf/cassandra.yaml:
listen_address: ubuntu-dev1
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "ubuntu-dev1,ubuntu-dev2"
 
Make the change on both nodes, ubuntu-dev1 and ubuntu-dev2.
Start the two nodes in the background:
> nohup cassandra -f conf/cassandra.yaml &
 
Verify that the cluster is working
> nodetool -h ubuntu-dev1 ring
Datacenter: datacenter1
==========
Address         Rack        Status State   Load            Owns                Token                                      
                                                                               7068820527558753619                        
10.190.191.195  rack1       Up     Normal  132.34 KB       36.12%              -4714763636920163240                       
10.190.190.190  rack1       Up     Normal  65.18 KB        63.88%              7068820527558753619   
 
1.3 Spark
I am choosing this old version.
spark-0.9.0-incubating-bin-hadoop1.tgz
 
Place it in the right location.
Set up passwordless SSH access between the master and the slaves.
On Master
> ssh-keygen -t rsa
> cat ~/.ssh/id_rsa.pub
 
On slave
> mkdir ~/.ssh
> vi ~/.ssh/authorized_keys
Paste in the public key from id_rsa.pub.
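 
Alternatively, if ssh-copy-id is available on the master, it installs the key in one step (the user and host names match the examples above):
>ssh-copy-id carl@ubuntu-dev2
Either way, ssh ubuntu-dev2 from the master should now log in without a password.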
 
Configure Spark in /opt/spark/conf/spark-env.sh:
SCALA_HOME=/opt/scala/scala-2.10.3
SPARK_WORKER_MEMORY=512m
#SPARK_CLASSPATH='/opt/localpoint-profiles-spark/*jar'
#SPARK_JAVA_OPTS="-Dbuild.env=lmm.sdprod"
USER=carl
 
List the workers in /opt/spark/conf/slaves:
ubuntu-dev1
ubuntu-dev2
 
Command to start the Spark cluster:
>sbin/start-all.sh
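 
To confirm the daemons came up, jps (shipped with the JDK) should show a Master process on ubuntu-dev1 and a Worker on each node listed in conf/slaves:
>jps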
 
Spark single-mode command (note the -cp flag needs the dash, and quoting the wildcard keeps the shell from expanding it):
>java -Dbuild.env=sillycat.dev -cp "/opt/YOUR_PROJECT/lib/*" com.sillycat.YOUR_CLASS
 
To point the job at the standalone master instead (7077 is the default master port):
>java -Dbuild.env=sillycat.dev -Dsparkcontext.Master="spark://YOUR_SERVER:7077" -cp "/opt/YOUR_PROJECT/lib/*" com.sillycat.YOUR_CLASS
 
Visit the Spark master web UI (http://ubuntu-dev1:8080 by default) to confirm the workers registered.
 
3. Prepare MySQL
>sudo apt-get install software-properties-common
>sudo add-apt-repository ppa:ondrej/mysql-5.6
>sudo apt-get update
>sudo apt-get install mysql-server
 
Commands to grant access and set the password (run inside the mysql shell):
>use mysql;
>grant all privileges on test.* to root@"%" identified by 'kaishi';
>flush privileges;
 
On the client machine, only the MySQL client is needed:
>sudo apt-get install mysql-client-core-5.6
 
If remote clients need to connect, change the bind address in /etc/mysql/my.cnf (the default below only accepts local connections), then restart:
bind-address            = 127.0.0.1
>sudo service mysql stop
>sudo service mysql start
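 
Then verify the grant from the client box (the host name assumes the nodes above):
>mysql -h ubuntu-dev1 -u root -p test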
 
4. Install Grails
Download it from the Grails site; I am using an old version.
 
5. Install Tomcat on the Master
 
Configure the database in TOMCAT_HOME/conf/context.xml:
    <Resource name="jdbc/lmm" auth="Container" type="javax.sql.DataSource"
              maxIdle="30" maxWait="-1" maxActive="100"
              factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
              testOnBorrow="true"
              validationQuery="select 1"
              logAbandoned="true"
              username="root"
              password="kaishi"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/lmm?autoReconnect=true&amp;useServerPrepStmts=false&amp;rewriteBatchedStatements=true"/>
 
Download the matching MySQL JDBC driver and place it in TOMCAT_HOME/lib:
> ls -l lib | grep mysql
-rw-r--r-- 1 carl carl  786484 Dec 10 09:30 mysql-connector-java-5.1.16.jar
 
Change the config to avoid OutOfMemoryError
> vi bin/catalina.sh 
JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=512m"
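 
Alternatively, catalina.sh sources bin/setenv.sh when that file exists, so the same options can live there and survive Tomcat upgrades:
> cat bin/setenv.sh
JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=512m"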
 
6. Running the Assembly Jar File
Build the assembly jar and place it in the lib directory, then create a shell script in the bin directory:
> cat bin/startup.sh
#!/bin/bash
 
java -Xms512m -Xmx1024m -Dbuild.env=lmm.sparkvm -Dspray.can.server.request-timeout=300s -Dspray.can.server.idle-timeout=360s -cp "/opt/YOUR_MODULE/lib/*" com.sillycat.YOUR_PACKAGE.YOUR_MAIN_CLASS
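 
Make the script executable and run it in the background, capturing the output (the log path is just an example):
> chmod +x bin/startup.sh
> nohup bin/startup.sh > logs/startup.log 2>&1 &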
 
Set up the Bouncy Castle jar: place it under the JRE ext directory, then register the provider in java.security.
>cd  /usr/lib/jvm/java-6-oracle/jre/lib/ext
>cd  /usr/lib/jvm/java-6-oracle/jre/lib/security
>sudo vi java.security 
security.provider.9=org.bouncycastle.jce.provider.BouncyCastleProvider
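 
A quick way to confirm the provider is registered (a sketch using the Scala REPL from /opt/scala; plain Java works the same way):
scala> java.security.Security.getProviders.foreach(p => println(p.getName))
BC should appear in the output once the jar is in lib/ext and the security.provider line is in place.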
 
7. JCE Problem
Download the JCE Unlimited Strength policy files for the matching JDK, unzip them, and place the jars into the jre/lib/security directory above.
 
8. Commands to Check Data in cqlsh
Connect to Cassandra:
> cqlsh localhost 9160
 
List the keyspaces:
cqlsh> select * from system.schema_keyspaces;
 
Check the version
cqlsh> show version
[cqlsh 3.1.8 | Cassandra 1.2.13 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
 
Use the keyspace (roughly the equivalent of a database):
cqlsh> use device_lookup;
 
Check the table:
cqlsh:device_lookup> select count(*) from profile_devices limit 300000;
 
During testing, if you need to clear the data:
delete from profile_devices where deviceid = 'ios1009528' and brandcode = 'spark' and profileid = 5;
delete from profile_devices where  brandcode = 'spark' and profileid = 5;
 
Deployment Option One
1. Add a Kryo registrator class that registers the model classes:
package com.sillycat.easyspark.profile
 
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator
 
import com.sillycat.easyspark.model.Attributes
import com.sillycat.easyspark.model.Profile
 
class ProfileKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[Attributes])
    kryo.register(classOf[Profile])
  }
}
 
2. Change the configuration and start the SparkContext as follows:
import com.typesafe.config.ConfigFactory
import org.apache.spark.{SparkConf, SparkContext}
 
val config = ConfigFactory.load()
 
val conf = new SparkConf()
conf.setMaster(config.getString("sparkcontext.Master"))
conf.setAppName("Profile Device Update")
conf.setSparkHome(config.getString("sparkcontext.Home"))
 
// Ship the job jar to the workers: an explicit path when configured,
// otherwise the jar that contains this class.
if (config.hasPath("jobJar")) {
  conf.setJars(List(config.getString("jobJar")))
} else {
  conf.setJars(SparkContext.jarOfClass(this.getClass).toSeq)
}
 
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.kryo.registrator", "com.sillycat.easyspark.profile.ProfileKryoRegistrator")
 
val sc = new SparkContext(conf)
It works.
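 
For reference, a minimal application.conf sketch matching the keys the code reads (the host and paths are assumptions for illustration):
sparkcontext {
  Master = "spark://ubuntu-dev1:7077"
  Home = "/opt/spark"
}
# jobJar is optional; when absent, the jar containing the job class is shipped instead
# jobJar = "/opt/easyspark/lib/easyspark-assembly.jar"
Typesafe Config also lets a -Dsparkcontext.Master=... system property override these values, which is what the single-mode commands above rely on.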
 
 
Tips
1. Command to unpack a jar file:
>jar xf jar-file
 
 
 