Linear regression handles linear problems of the form y = Wx + b.
Logistic regression handles binary yes/no problems.
Softmax handles multi-class classification problems.
These examples follow 《TensorFlow for Machine Intelligence》 (Chinese edition: 《面向机器智能TensorFlow实践》); the book's source code:
https://github.com/backstopmedia/tensorflowbook
# Linear regression example in TF (TensorFlow 1.x API).
import tensorflow as tf

W = tf.Variable(tf.zeros([2, 1]), name="weights")
b = tf.Variable(0., name="bias")

def inference(X):
    return tf.matmul(X, W) + b

def loss(X, Y):
    Y_predicted = tf.transpose(inference(X))  # make it a row vector
    return tf.reduce_sum(tf.squared_difference(Y, Y_predicted))

def inputs():
    # Data from http://people.sc.fsu.edu/~jburkardt/datasets/regression/x09.txt
    weight_age = [[84, 46], [73, 20], [65, 52], [70, 30], [76, 57], [69, 25],
                  [63, 28], [72, 36], [79, 57], [75, 44], [27, 24], [89, 31],
                  [65, 52], [57, 23], [59, 60], [69, 48], [60, 34], [79, 51],
                  [75, 50], [82, 34], [59, 46], [67, 23], [85, 37], [55, 40],
                  [63, 30]]
    blood_fat_content = [354, 190, 405, 263, 451, 302, 288, 385, 402, 365,
                         209, 290, 346, 254, 395, 434, 220, 374, 308, 220,
                         311, 181, 274, 303, 244]
    return tf.to_float(weight_age), tf.to_float(blood_fat_content)

def train(total_loss):
    learning_rate = 0.000001
    return tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)

def evaluate(sess, X, Y):
    # predict blood fat content for a few (weight, age) pairs
    print(sess.run(inference([[50., 20.]])))
    print(sess.run(inference([[50., 70.]])))
    print(sess.run(inference([[90., 20.]])))
    print(sess.run(inference([[90., 70.]])))

# Launch the graph in a session, setup boilerplate
with tf.Session() as sess:
    tf.initialize_all_variables().run()

    X, Y = inputs()
    total_loss = loss(X, Y)
    train_op = train(total_loss)

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    # actual training loop
    training_steps = 10000
    for step in range(training_steps):
        sess.run([train_op])
        if step % 1000 == 0:
            print("Epoch:", step, " loss: ", sess.run(total_loss))

    print("Final model W=", sess.run(W), "b=", sess.run(b))
    evaluate(sess, X, Y)

    coord.request_stop()
    coord.join(threads)
    sess.close()
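The listing above uses the TensorFlow 1.x graph/session API (tf.Session, tf.initialize_all_variables, tf.to_float), which no longer exists in TensorFlow 2. For comparison, here is a minimal sketch of the same model in eager TF 2.x style; it assumes TensorFlow 2 is installed and uses only the first few data rows for brevity:

# Minimal TF 2.x sketch of the same linear model (assumes TensorFlow 2.x).
import tensorflow as tf

# first few (weight, age) -> blood fat rows from the dataset above, for brevity
X = tf.constant([[84., 46.], [73., 20.], [65., 52.], [70., 30.]])
Y = tf.constant([[354.], [190.], [405.], [263.]])

W = tf.Variable(tf.zeros([2, 1]), name="weights")
b = tf.Variable(0., name="bias")
optimizer = tf.keras.optimizers.SGD(learning_rate=0.000001)

for step in range(10000):
    with tf.GradientTape() as tape:
        predicted = tf.matmul(X, W) + b
        total_loss = tf.reduce_sum(tf.square(Y - predicted))
    grads = tape.gradient(total_loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    if step % 1000 == 0:
        print("step", step, "loss", float(total_loss))

print("W =", W.numpy(), "b =", float(b))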
The Titanic example:
https://www.kaggle.com/c/titanic/data
Logistic regression (a Chinese-language reference):
https://blog.csdn.net/hongbin_xu/article/details/78270526
Logistic regression deciding yes vs. no:
# Logistic regression example in TF using Kaggle's Titanic Dataset.
# Download train.csv from https://www.kaggle.com/c/titanic/data
import tensorflow as tf
import os

# same params and variables initialization as linear regression.
W = tf.Variable(tf.zeros([5, 1]), name="weights")
b = tf.Variable(0., name="bias")

# former inference is now used for combining inputs
def combine_inputs(X):
    return tf.matmul(X, W) + b

# new inferred value is the sigmoid applied to the former
def inference(X):
    return tf.sigmoid(combine_inputs(X))

def loss(X, Y):
    # logits are the raw combined inputs, labels are the survival targets
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=combine_inputs(X), labels=Y))

def read_csv(batch_size, file_name, record_defaults):
    filename_queue = tf.train.string_input_producer(
        [os.path.join(os.getcwd(), file_name)])
    reader = tf.TextLineReader(skip_header_lines=1)
    key, value = reader.read(filename_queue)

    # decode_csv will convert a Tensor from type string (the text line) in
    # a tuple of tensor columns with the specified defaults, which also
    # sets the data type for each column
    decoded = tf.decode_csv(value, record_defaults=record_defaults)

    # batch actually reads the file and loads "batch_size" rows in a single tensor
    return tf.train.shuffle_batch(decoded,
                                  batch_size=batch_size,
                                  capacity=batch_size * 50,
                                  min_after_dequeue=batch_size)

def inputs():
    passenger_id, survived, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked = \
        read_csv(100, "train.csv",
                 [[0.0], [0.0], [0], [""], [""], [0.0], [0.0], [0.0], [""], [0.0], [""], [""]])

    # convert categorical data
    is_first_class = tf.to_float(tf.equal(pclass, [1]))
    is_second_class = tf.to_float(tf.equal(pclass, [2]))
    is_third_class = tf.to_float(tf.equal(pclass, [3]))
    gender = tf.to_float(tf.equal(sex, ["female"]))

    # Finally we pack all the features in a single matrix;
    # We then transpose to have a matrix with one example per row and one feature per column.
    features = tf.transpose(tf.stack([is_first_class, is_second_class, is_third_class, gender, age]))
    survived = tf.reshape(survived, [100, 1])

    return features, survived

def train(total_loss):
    learning_rate = 0.01
    return tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)

def evaluate(sess, X, Y):
    predicted = tf.cast(inference(X) > 0.5, tf.float32)
    print(sess.run(tf.reduce_mean(tf.cast(tf.equal(predicted, Y), tf.float32))))

# Launch the graph in a session, setup boilerplate
with tf.Session() as sess:
    tf.initialize_all_variables().run()

    X, Y = inputs()
    total_loss = loss(X, Y)
    train_op = train(total_loss)

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    # actual training loop
    training_steps = 1000
    for step in range(training_steps):
        sess.run([train_op])
        # for debugging and learning purposes, see how the loss decreases through training
        if step % 10 == 0:
            print("loss: ", sess.run([total_loss]))

    evaluate(sess, X, Y)

    import time
    time.sleep(5)

    coord.request_stop()
    coord.join(threads)
    sess.close()
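One fix worth calling out: the original listing wrote the loss as labels=combine_inputs(X), logits=Y, i.e. with the two arguments swapped; the version above passes the raw model output as logits and the survival flags as labels. The function is not symmetric in its arguments, which a small NumPy sketch of the formula it implements (per the TF documentation: max(x, 0) - x*z + log(1 + exp(-|x|)) for logits x and labels z) makes easy to see:

# NumPy sketch of the stable sigmoid cross-entropy formula (assumes only NumPy).
import numpy as np

def sigmoid_xent(logits, labels):
    # max(x, 0) - x*z + log(1 + exp(-|x|)), the numerically stable form
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

x = np.array([-2.0, 0.5, 3.0])   # raw model outputs (logits)
z = np.array([0.0, 1.0, 1.0])    # targets (labels)
print(sigmoid_xent(x, z))        # the intended per-example loss terms
print(sigmoid_xent(z, x))        # arguments swapped: a different quantity entirely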
softmax.py
# Softmax example in TF using the classical Iris dataset
# Download iris.data from https://archive.ics.uci.edu/ml/datasets/Iris
# Be sure to remove the last empty line of it before running the example
import tensorflow as tf
import os

# this time weights form a matrix, not a column vector, one "weight vector" per class.
W = tf.Variable(tf.zeros([4, 3]), name="weights")
# so do the biases, one per class.
b = tf.Variable(tf.zeros([3]), name="bias")

def combine_inputs(X):
    return tf.matmul(X, W) + b

def inference(X):
    return tf.nn.softmax(combine_inputs(X))

def loss(X, Y):
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=combine_inputs(X), labels=Y))

def read_csv(batch_size, file_name, record_defaults):
    # read the data file from the directory containing this script
    filename_queue = tf.train.string_input_producer(
        [os.path.dirname(os.path.abspath(__file__)) + "/" + file_name])
    reader = tf.TextLineReader()
    key, value = reader.read(filename_queue)

    # decode_csv will convert a Tensor from type string (the text line) in
    # a tuple of tensor columns with the specified defaults, which also
    # sets the data type for each column
    decoded = tf.decode_csv(value, record_defaults=record_defaults)

    # batch actually reads the file and loads "batch_size" rows in a single tensor
    return tf.train.shuffle_batch(decoded,
                                  batch_size=batch_size,
                                  capacity=batch_size * 50,
                                  min_after_dequeue=batch_size)

def inputs():
    sepal_length, sepal_width, petal_length, petal_width, label = \
        read_csv(50, "iris.data", [[0.0], [0.0], [0.0], [0.0], [""]])

    # convert class names to a 0 based class index.
    label_number = tf.to_int32(tf.argmax(tf.to_int32(tf.stack([
        tf.equal(label, ["Iris-setosa"]),
        tf.equal(label, ["Iris-versicolor"]),
        tf.equal(label, ["Iris-virginica"])
    ])), 0))

    # Pack all the features that we care about in a single matrix;
    # We then transpose to have a matrix with one example per row and one feature per column.
    features = tf.transpose(tf.stack([sepal_length, sepal_width, petal_length, petal_width]))

    return features, label_number

def train(total_loss):
    learning_rate = 0.01
    return tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)

def evaluate(sess, X, Y):
    predicted = tf.cast(tf.argmax(inference(X), 1), tf.int32)
    print(sess.run(tf.reduce_mean(tf.cast(tf.equal(predicted, Y), tf.float32))))

# Launch the graph in a session, setup boilerplate
with tf.Session() as sess:
    tf.initialize_all_variables().run()  # tf.global_variables_initializer().run() on newer TF 1.x

    X, Y = inputs()
    total_loss = loss(X, Y)
    train_op = train(total_loss)

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    # actual training loop
    training_steps = 1000
    for step in range(training_steps):
        sess.run([train_op])
        # for debugging and learning purposes, see how the loss decreases through training
        if step % 10 == 0:
            print("loss: ", sess.run([total_loss]))

    evaluate(sess, X, Y)

    coord.request_stop()
    coord.join(threads)
    sess.close()
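The least obvious step in inputs() above is the label encoding: three tf.equal tests are stacked into a [3, batch] tensor of 0/1 values, and argmax over axis 0 yields a 0-based class index. The same trick in plain NumPy (a sketch for illustration only; assumes NumPy):

# NumPy illustration of the stack-then-argmax label encoding used in inputs().
import numpy as np

labels = np.array(["Iris-setosa", "Iris-virginica", "Iris-versicolor"])
stacked = np.stack([labels == "Iris-setosa",
                    labels == "Iris-versicolor",
                    labels == "Iris-virginica"]).astype(np.int32)  # shape [3, batch]
label_number = np.argmax(stacked, axis=0)
print(label_number)  # [0 2 1]: setosa -> 0, versicolor -> 1, virginica -> 2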
iris.data
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
...
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
...
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
...
(150 rows in total, one record per line; the full file can be downloaded from https://archive.ics.uci.edu/ml/datasets/Iris)
A note on the data: be sure to remove the trailing empty lines before running the example. The file as officially distributed ends with several blank lines, and if they are not removed, reading the data will go out of bounds.
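A small helper to do that cleanup, a sketch assuming iris.data sits next to the script:

# Strip trailing blank lines from iris.data so tf.decode_csv never sees an empty row.
path = "iris.data"  # adjust if the file lives elsewhere
with open(path) as f:
    lines = f.read().splitlines()
while lines and not lines[-1].strip():
    lines.pop()
with open(path, "w") as f:
    f.write("\n".join(lines) + "\n")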