Migrate Data from MySQL to DynamoDB
Directly Write to DynamoDB
https://github.com/audienceproject/spark-dynamodb
I thought this should work, but it does not work for reading.
%spark.dep
z.load("mysql:mysql-connector-java:5.1.47")
z.load("com.github.traviscrawford:spark-dynamodb:0.0.13")
z.load("com.audienceproject:spark-dynamodb_2.11:0.4.1")
This read does not work
import com.audienceproject.spark.dynamodb.implicits._
val accountDF = spark.read.option("region","us-west-1").dynamodb("account-int-accounts")
accountDF.printSchema()
accountDF.show(2)
This read works
import com.github.traviscrawford.spark.dynamodb._
val accountDF = sqlContext.read.dynamodb("us-west-1", "account-int-accounts")
accountDF.printSchema()
accountDF.show(1)
This works for writing data, but I do not think it handles the table's write capacity well
%spark.dep
z.load("mysql:mysql-connector-java:5.1.47")
z.load("com.github.traviscrawford:spark-dynamodb:0.0.13")
z.load("com.audienceproject:spark-dynamodb_2.11:0.4.1")
z.load("com.google.guava:guava:14.0.1")
import com.github.traviscrawford.spark.dynamodb._
val accountDF = sqlContext.read.dynamodb("us-west-1", "account-int-accounts")
accountDF.printSchema()
accountDF.show(1)
import com.audienceproject.spark.dynamodb.implicits._
accountDF.write.option("region", "us-west-1").dynamodb("account-int-accounts2")
The Read Works as Well
import com.audienceproject.spark.dynamodb.implicits._
val dynamoDF = spark.read.option("region", "us-west-1").dynamodb("account-int-accounts")
dynamoDF.printSchema()
dynamoDF.show(5)
DynamoDB Format and AWS Command
https://github.com/lmammino/json-dynamo-putrequest
First of all, prepare the JSON file on the server; usually I download it from HDFS
> hdfs dfs -get hdfs://localhost:9000/mysqltodynamodb/account2 ./account2
Find the JSON file account2.json
Install NodeJS if it is not on the system
> sudo apt install nodejs
> sudo apt install npm
> node --version && npm --version
v8.10.0
3.5.2
Install the software
> sudo npm install --global json-dynamo-putrequest
Check installation
> json-dynamo-putrequest --help
> json-dynamo-putrequest --version
1.0.0
Command
> json-dynamo-putrequest account-int-accounts2 --output account-dynamo.json < account2.json
Error: Input data needs to be an array
The Spark output has one JSON object per line, so add [ at the start and ] at the end, append a comma after each object (replace } with }, for every object except the last), and try the data again.
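The manual edit can also be scripted. A minimal NodeJS sketch, assuming account2.json is the usual Spark output of one JSON object per line (the file names here are only examples):
// jsonl-to-array.js - turn one-JSON-object-per-line output into a JSON array
// Usage: node jsonl-to-array.js account2.json account2-array.json
const fs = require('fs');
const [input, output] = process.argv.slice(2);
const objects = fs.readFileSync(input, 'utf8')
  .split('\n')
  .filter(line => line.trim() !== '')  // drop blank lines
  .map(line => JSON.parse(line));      // each line is a complete JSON object
fs.writeFileSync(output, JSON.stringify(objects, null, 2));
console.log('Wrote ' + objects.length + ' records to ' + output);
Either way, once the input is a JSON array, the command goes through.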
> json-dynamo-putrequest account-int-accounts2 --output account-dynamo.json < account2.json
Output saved in /home/ubuntu/data/account-dynamo.json
File is ready as account-dynamo.json
https://github.com/lmammino/json-dynamo-putrequest
Then follow the documentation to import the data into the table
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SampleData.LoadData.html
Create the table from the console and run the import command
> aws dynamodb batch-write-item --request-items file:///home/ubuntu/data/account-dynamo.json
at 'requestItems' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 25, Member must have length greater than or equal to 1]
Haha, in the docs all the samples have fewer than 25 items; batch-write-item accepts at most 25 put requests per call.
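One lightweight workaround is to split the generated request file into chunks of 25 and feed them to the CLI one at a time. A minimal sketch, assuming account-dynamo.json has the {"table-name": [ {"PutRequest": ...}, ... ]} layout that json-dynamo-putrequest produces (the chunk file names are only examples):
// split-dynamo-requests.js - split a batch-write-item request file into 25-item chunks
// Usage: node split-dynamo-requests.js account-dynamo.json
const fs = require('fs');
const source = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));
const tableName = Object.keys(source)[0];  // the file maps table name -> array of put requests
const requests = source[tableName];
const LIMIT = 25;                          // batch-write-item hard limit per call
for (let i = 0; i < requests.length; i += LIMIT) {
  const chunk = { [tableName]: requests.slice(i, i + LIMIT) };
  const fileName = 'account-dynamo-' + (i / LIMIT) + '.json';
  fs.writeFileSync(fileName, JSON.stringify(chunk));
  console.log('Wrote ' + fileName + ' with ' + chunk[tableName].length + ' requests');
}
Each chunk file can then be passed to aws dynamodb batch-write-item --request-items file://... in a loop.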
Alternatively, directly writing a NodeJS script to parse the source data and do the import works
"use strict";
// How to run
// node dynamodb-scripts/import-devices-to-dynamodb.js {ENV} ./css_devices_only_csv.txt
// eg: node dynamodb-scripts/import-devices-to-dynamodb.js int ./css_devices_only_csv.txt
var importDevicesToDynamo;
process.env.AWS_SDK_LOAD_CONFIG = true;
(function (importDevicesToDynamo) {
  const fs = require('fs');
  const babyparse = require("babyparse");
  const AWS = require("aws-sdk");
  const log4js = require('log4js');
  const logger = log4js.getLogger();
  const sleep = require('sleep');
  const env = process.argv[2]; // Must be int, stage or prod
  const csvFilePath = process.argv[3];
  const config = {
    delimiter: ',',
    newline: "",
    quoteChar: '"',
    header: true,
    dynamicTyping: false,
    preview: 0,
    encoding: "utf8",
    worker: false,
    comments: false,
    skipEmptyLines: true
  };
  let tableName = `lifesize_device-${env}-devicePairingInfo`;
  let accessKey = "";
  let signatureKey = "";
  let region = "";
  let dynamoDbUrl = "";
  // validate parameters
  if (!env) {
    console.log("\nMust pass in environment for 1st argument. Must be one of 'int', 'stage' or 'prod'");
    console.log("\nUsage - node dynamodb-scripts/import-devices-to-dynamodb.js {env} {csv path/file } ");
    console.log("\nExample - node dynamodb-scripts/import-devices-to-dynamodb.js int ./css_devices_only_csv.txt \n");
    process.exit(1);
  }
  if (!csvFilePath) {
    console.log("\nMust pass in csvFilePath for 2nd argument.");
    console.log("\nUsage - node dynamodb-scripts/import-devices-to-dynamodb.js {env} {csv path/file } ");
    console.log("\nExample - node dynamodb-scripts/import-devices-to-dynamodb.js int ./css_devices_only_csv.txt \n");
    process.exit(2);
  }
  console.log("Env = " + env);
  console.log("File to import = " + csvFilePath);
  let content = fs.readFileSync(csvFilePath, config);
  let parsed = babyparse.parse(content, config);
  let rows = JSON.parse(JSON.stringify(parsed.data));
  console.log("Row count = " + Object.keys(rows).length);
  let _id;
  // For the batch size of 10, we need to temporarily change the write capacity units to 50 in DynamoDB for the appropriate table, then reset to the default when the script is finished
  let size = 10;
  console.log("dynamoDbURL = " + dynamoDbUrl);
  console.log("tableName = " + tableName);
  var credentials = new AWS.SharedIniFileCredentials();
  AWS.config.credentials = credentials;
  const dynamoDb = new AWS.DynamoDB.DocumentClient();
  let uniqueSerialNumbers = [];
  for (let i = 0; i < rows.length; i += size) {
    // Slice the array into smaller batches of `size` items
    let smallarray = rows.slice(i, i + size);
    console.log("i = " + i + " serialNumber = " + smallarray[0].serialNumber);
    let batchItems = smallarray.map(function (item) {
      try {
        const serialNumber = item.serialNumber;
        if (uniqueSerialNumbers.includes(serialNumber)) {
          //console.log("System ignore duplicated record", item);
          return null;
        } else {
          uniqueSerialNumbers.push(serialNumber);
        }
        // Replace empty string values with null. DynamoDB doesn't allow empty strings and will throw an error on the request.
        for (let items in item) {
          let value = item[items];
          if (value === undefined || value === "") {
            item[items] = null;
          }
          if (items == "enabled") {
            if (value === "f") {
              item[items] = false;
            } else if (value === "t") {
              item[items] = true;
            }
          }
        }
        item.adminAccountUUID = null;
        item.sessionID = null;
        item.pairingCodeCreateTime = null;
        if (item.systemName === null) {
          item.systemName = item.userExtension.toString();
        }
        if (item.pairingstatus === 'DEFAULT') {
          item.pairingstatus = "COMPLETE";
        }
        if (item.plaform === 'GRAPHITE') {
          item.deviceUUID = item.serialNumber;
        }
        if (item.userExtension && !item.extension) {
          item.extension = item.userExtension.toString();
          console.log(`++++++++++++++++++++++++++++++++++++`);
        }
        let params = {
          PutRequest: { Item: JSON.parse(JSON.stringify(item)) }
        };
        console.log("params = " + JSON.stringify(params, null, 2));
        return params;
      }
      catch (error) {
        console.log("**** ERROR processing file: " + error);
        return null;
      }
    }).filter((obj) => obj !== null);
    if (batchItems.length === 0) {
      console.log("System filtered out all the duplicate data, nothing left");
      continue;
    }
    let batchRequestParams = '{"RequestItems":{"' + tableName + '":' + JSON.stringify(batchItems) + '},"ReturnConsumedCapacity":"TOTAL","ReturnItemCollectionMetrics": "SIZE"}';
    console.log("batchRequestParams ============================================================ ");// + batchRequestParams);
    callDynamo(batchRequestParams).then(function (data) {
      sleep.msleep(100);
    }).catch(console.error);
  }
  function callDynamo(batchRequestParams) {
    return new Promise(function (resolve, reject) {
      dynamoDb.batchWrite(JSON.parse(batchRequestParams), function (err, data) {
        try {
          if (err) {
            logger.error(`Error - ${err} = Trying again:`);
            sleep.msleep(100);
            dynamoDb.batchWrite(JSON.parse(batchRequestParams), function (err, data) {
              try {
                if (err) {
                  //console.log("------------- data is beauty:", batchRequestParams);
                  logger.error("Unable to add item a 2nd time, Error:", err);
                  return reject(err);
                }
                else {
                  logger.debug("2nd PutItem succeeded");
                  resolve(data);
                }
              }
              catch (error) {
                //console.log("------------- data is here:", batchRequestParams);
                console.log("error calling DynamoDB - " + error);
                return reject(err);
              }
            });
          }
          else {
            logger.debug("PutItem succeeded");
            resolve(data);
          }
        }
        catch (error) {
          console.log("error calling DynamoDB - " + error);
          return reject(err);
        }
      });
    });
  }
})(importDevicesToDynamo || (importDevicesToDynamo = {}));
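One thing the script above does not handle is the UnprocessedItems field that batchWrite returns when DynamoDB throttles part of a batch. A minimal sketch of re-submitting them, assuming the same dynamoDb DocumentClient, logger and sleep helpers as in the script (the retry count is arbitrary):
// Re-submit any items DynamoDB reports back as unprocessed (e.g. due to throttling).
async function batchWriteWithRetry(requestItems, attempts = 5) {
  let remaining = requestItems;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    const result = await dynamoDb.batchWrite({ RequestItems: remaining }).promise();
    const unprocessed = result.UnprocessedItems || {};
    if (Object.keys(unprocessed).length === 0) {
      return; // everything written
    }
    logger.warn(`Attempt ${attempt}: re-submitting unprocessed items`);
    remaining = unprocessed;
    sleep.msleep(100 * attempt); // simple backoff before the next attempt
  }
  throw new Error("Unprocessed items remain after retries");
}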
References:
https://github.com/audienceproject/spark-dynamodb
https://stackoverflow.com/questions/37444607/writing-from-spark-to-dynamodb
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SampleData.LoadData.html
https://github.com/lmammino/json-dynamo-putrequest