Uploading large files the normal way through plain Mongrel runs into memory exhaustion and blocks the whole Rails service.
Below are some approaches.
Rails and Large, Large file Uploads: looking at the alternatives
Uploading files in rails is a relatively easy task. There are a lot of helpers to manage this even more flexibly, such as attachment_fu or paperclip. But what happens if you upload *VERY VERY LARGE* files (say 5GB) in rails? Do the standard solutions still apply? The main thing is that we want to avoid load-the-file-into-memory strategies and multiple temporary file writes.
This document describes our findings on uploading these kinds of files in a rails environment. We tried the following alternatives:
Using Webrick
Using Mongrel
Using Merb
Using Mongrel Handlers
Using Sinatra
Using Rack Metal
Using Mod_Rails aka Passenger
Non-Rails Alternatives
(original image from http://www.masternewmedia.org/)
And I'm afraid the news is not that good. For now...
A simple basic Upload Handler (to get started)
OK, let's make a little upload application (loosely based upon http://www.tutorialspoint.com/ruby-on-rails/rails-file-uploading.htm).
Install rails (just to show you the version I used)
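The post uses rails 2.3.2, with the app and controller generated via `rails upload-test` and `script/generate controller Upload`. The first step is a controller with two actions: 'index' shows the form uploadfile.html.erb, and 'upload' handles the upload (reproduced from the original, Rails 2.x syntax):

```ruby
#app/controller/upload_controller.rb
class UploadController < ApplicationController
  def index
    render :file => 'app/views/upload/uploadfile.html.erb'
  end

  def upload
    post = Datafile.save(params[:uploadform])
    render :text => "File has been uploaded successfully"
  end
end
```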
The second step is to create the view with the file upload form for the browser. Note the multipart parameter, needed to POST the file:
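The form view, as given in the post:

```erb
#app/views/upload/uploadfile.html.erb
<% form_for :uploadform, :url => { :action => 'upload' }, :html => { :multipart => true } do |f| %>
  <%= f.file_field :datafile %><br />
  <%= f.submit 'Create' %>
<% end %>
```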
The last step is to create the model, which saves the uploaded file to public/data. Note the original_filename we use to name the file on disk:
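The model from the post:

```ruby
#app/models/datafile.rb
class Datafile < ActiveRecord::Base
  def self.save(upload)
    name = upload['datafile'].original_filename
    directory = "public/data"
    # create the file path
    path = File.join(directory, name)
    # write the file
    File.open(path, "wb") { |f| f.write(upload['datafile'].read) }
  end
end
```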
Before we start up, we create the public/data dir.
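Then boot the server (transcript abridged from the post):

```shell
$ mkdir public/data
$ script/server webrick
=> Booting WEBrick
=> Rails 2.3.2 application starting on http://0.0.0.0:3000
```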
Point your browser to http://localhost:3000/upload and you can upload a file. If all goes well, there should be a file in public/data with the same name as the file you uploaded.
Scripting a large Upload
Browsers have their limitations for file uploads. Depending on whether you are on a 64-bit OS with a 64-bit browser, you can upload larger files, but 2GB seems to be the limit.
For scripting the upload we will use curl to do the same thing. To upload a file called large.zip to our form, you can use:
curl -Fuploadform['datafile']=@large.zip http://localhost:3000/upload/upload
If you use this as-is, rails throws the following error: "ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):"
As described in http://ryandaigle.com/articles/2007/9/24/what-s-new-in-edge-rails-better-cross-site-request-forging-prevention, this is used to protect rails against cross-site request forgery. We need to have rails skip this filter:
#app/controller/upload_controller.rb
class UploadController < ApplicationController
  skip_before_filter :verify_authenticity_token
  # ...
end
Webrick and Large File Uploads
Webrick is the default webserver that ships with rails. Now let's upload a large file and see what happens.
OK, it's natural that this takes longer to handle. But zoom in on the memory usage of your ruby process, for instance with top:
7895 ruby 16.0% 0:26.61 2 33 144 559M 188K 561M 594M
====> Memory GROWS: We see that the ruby process is growing and growing. I guess it is because webrick loads the body in a string first.
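The dispatch code in webrick_server.rb suggests why: the whole request body is wrapped in a StringIO, i.e. read into memory first (snippet as quoted in the post):

```ruby
#gems/rails-2.3.2/lib/webrick_server.rb
def handle_dispatch(req, res, origin = nil) #:nodoc:
  data = StringIO.new
  Dispatcher.dispatch(
    CGI.new("query", create_env_table(req, origin), StringIO.new(req.body || "")),
    ActionController::CgiRequest::DEFAULT_SESSION_OPTIONS,
    data
  )
  # ...
end
```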
=====> Files get written to disk multiple times for the multipart parsing: when the file is uploaded, you see messages appearing in the webrick log referencing a file in /var/folder/EI/....
Processing UploadController#upload (for ::1 at 2009-04-09 13:51:23) [POST]
Parameters: {"commit"=>"Create", "authenticity_token"=>"rf4V5bmHpxG74q6ueI3hUjJzwhTLUJCp9VO1uMV1Rd4=", "uploadform"=>{"datafile"=>#<File:/var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/RackMultipart.7895.1>}}
[2009-04-09 14:09:03] INFO WEBrick::HTTPServer#start: pid=7974 port=3000
It turns out, that the part that handles the multipart, writes the files to disk in the $TMPDIR. It creates files like
$ ls $TMPDIR/
RackMultipart.7974.0
RackMultipart.7974.1
Strange: two times? We only uploaded one file! I figure this is handled by the rack/utils.rb bundled in action_controller. Possibly related is the bug described at https://rails.lighthouseapp.com/projects/8994/tickets/1904-rack-middleware-parse-request-parameters-twice
Optimizing the last write to disk
Instead of
# write the file
File.open(path, "wb") { |f| f.write(upload['datafile'].read) }
We can use the following to avoid writing to disk ourselves:
FileUtils.mv upload['datafile'].path, path
This makes use of the fact that the file is already on disk, and a file move is much faster than rewriting the file.
Still, this might not be usable in all cases: if your TMPDIR is on another filesystem than your final destination, this trick won't help you.
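The difference can be demonstrated with a small stand-alone script (the file names here are made up for illustration; Rack's real tempfiles follow the RackMultipart.<pid>.x pattern shown earlier):

```ruby
require 'fileutils'
require 'tmpdir'

Dir.mktmpdir do |tmp|
  # simulate the tempfile that the multipart parser leaves on disk
  src = File.join(tmp, "RackMultipart.1234.0")
  File.write(src, "uploaded bytes")

  dest = File.join(tmp, "large.zip")

  # instead of re-reading and re-writing the whole body with
  #   File.open(dest, "wb") { |f| f.write(File.read(src)) }
  # a rename is a metadata-only operation on the same filesystem:
  FileUtils.mv(src, dest)

  puts File.exist?(dest)  # => true: the upload now lives at its final path
  puts File.exist?(src)   # => false: the tempfile is gone
end
```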
Mongrel and Large File Uploads
The behaviour of Webrick was already discussed on the mongrel mailing list (http://osdir.com/ml/lang.ruby.mongrel.general/2007-10/msg00096.html) and is supposed to be fixed there. So let's install mongrel.
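Installing and starting it (transcript from the post):

```shell
$ gem install mongrel
Successfully installed gem_plugin-0.2.3
Successfully installed daemons-1.0.10
Successfully installed fastthread-1.0.7
Successfully installed cgi_multipart_eof_fix-2.5.0
Successfully installed mongrel-1.1.5
$ mongrel_rails start
```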
Ok, let's start the upload again using our curl:
======> Memory does not grow: that's good news.
======> 4 file writes for 1 upload! Because Mongrel does not keep the upload in memory, it writes it to a tempfile in the $TMPDIR. Depending on the size of the file (larger than MAX_BODY or not), it will create a tempfile or just keep a string in memory:
lib/mongrel/const.rb
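The relevant constants and request-body handling, as quoted from the mongrel source:

```ruby
# lib/mongrel/const.rb
# This is the maximum header that is allowed before a client is booted.
MAX_HEADER = 1024 * (80 + 32)
# Maximum request body size before it is moved out of memory and into a tempfile for reading.
MAX_BODY = MAX_HEADER

# lib/mongrel/http_request.rb
# must read more data to complete body
if remain > Const::MAX_BODY
  # huge body, put it in a tempfile
  @body = Tempfile.new(Const::MONGREL_TMP_BASE)
  @body.binmode
else
  # small body, just use that
  @body = StringIO.new
end
```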
In our tests we saw that, aside from the RackMultipart.<pid>.x files, there is an additional file written in $TMPDIR: mongrel.<pid>.0
That means that for 5 GB we now have 4 x 5 GB: 1 mongrel + 2 RackMultipart + 1 final file (depending on the move or not) = 20 GB.
======> Not reliable , predictable results?
Also, on some uploads mongrel did not create the RackMultipart files but CGI.<pid>.0 instead. We're unsure what the reason is.
Merb and Large File Uploads
One of the solutions you see suggested for handling file uploads is using Merb; the main reason is that there is less blocking of your handlers.
http://www.idle-hacking.com/2007/09/scalable-file-uploads-with-merb/
http://devblog.rorcraft.com/2008/8/25/uploading-large-files-to-rails-with-merb
http://blog.vixiom.com/2007/06/29/merb-on-air-drag-and-drop-multiple-file-upload/
Let's try this:
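Install merb and generate the application (merb 1.0.11 at the time, transcript abridged from the post):

```shell
$ gem install merb
Successfully installed merb-1.0.11
$ merb-gen app uploader-app
$ cd uploader-app
```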
We need to create the controller, but this is a bit different from our original controller:
the file is called upload.rb instead of upload_controller.rb
removed the skip_before
in Merb it is Application and not ApplicationController
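The merb controller from the post:

```ruby
#app/controllers/upload.rb
class Upload < Application
  def index
    render :file => 'app/views/upload/uploadfile.rhtml'
  end

  def upload
    post = Datafile.save(params[:uploadform])
    render :text => "File has been uploaded successfully"
  end
end
```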
The model looks like this:
Remove the ActiveRecord
include DataMapper::Resource
original_filename does not exist: merb passes it in the variable filename
tempfile also reflects how merb passes the temporary file
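The merb model from the post:

```ruby
#app/models/datafile.rb
class Datafile
  include DataMapper::Resource

  def self.save(upload)
    name = upload['datafile']['filename']
    directory = "public/data"
    # create the file path
    path = File.join(directory, name)
    # write the file
    File.open(path, "wb") { |f| f.write(upload['datafile']['tempfile'].read) }
  end
end
```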
We create the public/data
$ mkdir public/data
And start merb .
$ merb
~ Connecting to database...
~ Loaded slice 'MerbAuthSlicePassword' ...
~ Parent pid: 57318
~ Compiling routes...
~ Activating slice 'MerbAuthSlicePassword' ...
merb : worker (port 4000) ~ Starting Mongrel at port 4000
When you start the upload, a merb worker becomes active.
=====> No memory increases : good!
merb : worker (port 4000) ~ Successfully bound to port 4000
=====> 3 Filewrites: 1 mongrel + 1 merb + 1 final write
Mongrel first starts writing its mongrel.<pid>.0 in our $TMPDIR/:
merb : worker (port 4000) ~ Params: {"format"=>nil, "action"=>"upload", "id"=>nil, "controller"=>"upload", "uploadform"=>{"datafile"=>{"content_type"=>"application/octet-stream",
"size"=>306609434, "tempfile"=>#<File:/var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/Merb.13243.0>, "filename"=>"large.zip"}}}
merb : worker (port 4000) ~
After that Merb handles the multipart stream and writes once in $TMPDIR/Merb.<pid>.0
Sinatra and Large Files:
Sinatra is a simple framework for writing the controllers yourself. Because it seemed to have direct access to the stream, I hoped that I would be able to stream the upload directly, without Rack's multipart handling.
http://technotales.wordpress.com/2008/03/05/sinatra-the-simplest-thing-that-could-possibly-work/
http://m.onkey.org/2008/11/10/rails-meets-sinatra
http://www.slideshare.net/jiang.wu/ruby-off-rails
http://sinatra-book.gittr.com/
First step install sinatra:
$ gem install sinatra
Successfully installed sinatra-0.9.1.1
1 gem installed
Installing ri documentation for sinatra-0.9.1.1...
Installing RDoc documentation for sinatra-0.9.1.1...
Create a sample upload handler:
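The sample handler from the post (it still reaches into the parsed multipart params):

```ruby
#upload-sinatra.rb
require 'rubygems'
require 'sinatra'

post '/upload' do
  File.open("/tmp/theuploadedfile", "wb") { |f| f.write(params[:datafile]['file'].read) }
end
```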
$ ruby upload-sinatra.rb
== Sinatra/0.9.1.1 has taken the stage on 4567 for development with backup from Mongrel
So instead of 3000 it listens on 4567
====> No memory increase: good!
====> 4 file writes: Again we see 4= 1 Mongrel.<pid>.* + 2 x Multipart.<pid>.* + 1 file write
Using Mongrel handlers to bypass other handlers
Up until now we have the webserver, the multipart parser and the final write. So how can we skip the webserver or the multipart writing to disk, without consuming all the memory?
I found another approach by using a standalone mongrel handler:
http://rubyenrails.nl/articles/2007/12/24/rails-mvc-aan-je-laars-lappen-met-mongrel-handlers
http://www.ruby-forum.com/topic/128070
This allows you to interact with the incoming stream before Rack/Multipart kicks in.
Let's create an example Mongrel Handler. It's just the part that shows you that you can access the request directly:
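Here is the handler from the post; it prints the path of the request body and returns a plain response:

```ruby
require 'rubygems'
require 'mongrel'

class HelloWorldHandler < Mongrel::HttpHandler
  def process(request, response)
    puts request.body.path
    response.start(200) do |head, out|
      head['Content-Type'] = "text/plain"
      out << "Hello world!"
    end
  end

  def request_progress(params, clen, total)
  end
end

Mongrel::Configurator.new do
  listener :port => 3000 do
    uri "/", :handler => HelloWorldHandler.new
  end
  run; join
end
```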
=====>No memory increase: good!
=====>1 FILE and direct access, but still needs multipart parsing:
It turns out that request.body.path is the mongrel.<pid>.0 file, giving us direct access to the uploaded file.
request.body.path = /var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/mongrel.93690.0
Using Rails Metal
Metal is an addition to Rails 2.3 that allows you to write bare Rack endpoints that bypass most of the Rails stack.
http://soylentfoo.jnewland.com/articles/2008/12/16/rails-metal-a-micro-framework-with-the-power-of-rails-m
http://railscasts.com/episodes/150-rails-metal
http://www.pathf.com/blogs/2009/03/uploading-files-to-rails-metal/
http://www.ruby-forum.com/topic/171070
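The metal endpoint from the post (by Rails 2.3 convention this lives under app/metal/):

```ruby
# Allow the metal piece to run in isolation
require(File.dirname(__FILE__) + "/../../config/environment") unless defined?(Rails)

class Uploader
  def self.call(env)
    if env["PATH_INFO"] =~ /^\/uploader/
      puts env["rack.input"].path
      [200, {"Content-Type" => "text/html"}, ["It worked"]]
    else
      [400, {"Content-Type" => "text/html"}, ["Error"]]
    end
  end
end
```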
Similar to the Mongrel HTTP Handler, we can get access to the mongrel file upload via
env["rack.input"].path, which is actually the /var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/mongrel.81685.0 file.
If we want to parse this, we can pass the env to Rack::Request.new, but that kicks in the RackMultipart handling again:
request = Rack::Request.new(env)
puts request.POST
#uploaded_file = request.POST["file"][:tempfile].read
=====>No memory increase: good!
=====>1 FILE and direct access, but still needs multipart parsing
=====>Can still run traditional rails and metal rails in the same webserver
Using Mod_rails aka Passenger
Mod_rails seems to be becoming the new standard for running rails applications without the blocking hassle, using plain apache as a good, stable, proven technology.
One of the main benefits is that it doesn't block the handler from sending a response back until the complete request is handled. Sounds like good technology!
http://www.pathf.com/blogs/2009/03/uploading-files-to-rails-metal/
curl -v -F datafile['file']=@large.zip http://localhost:80/
* About to connect() to localhost port 80
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80
> POST /datafiles HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: localhost
> Accept: */*
> Content-Length: 421331151
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------1bf75aea2f35
>
< HTTP/1.1 100 Continue
Setting up mod_rails is beyond the scope of this document. So we assume you have it working for your rails app.
in my /etc/httpd/conf/httpd.conf
LoadModule passenger_module /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/mod_passenger.so
PassengerRoot /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3
PassengerRuby /opt/ruby-enterprise-1.8.6-20090201/bin/ruby
Mod_rails has a nice setting that lets you specify the temp dir Passenger uses:
See http://www.modrails.com/documentation/Users%20guide.html#_passengertempdir_lt_directory_gt for more details
5.10. PassengerTempDir <directory>
Specifies the directory that Phusion Passenger should use for storing temporary files. This includes things such as Unix socket files, buffered file uploads, etc.
This option may be specified once, in the global server configuration. The default temp directory that Phusion Passenger uses is /tmp.
This option is especially useful if Apache is not allowed to write to /tmp (which is the case on some systems with strict SELinux policies) or if the partition that /tmp lives on doesn’t have enough disk space.
Ok let's start the upload and see what happens:
=====> Memory goes up!
30847 4 14.1 MB 0.1 MB /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/ApplicationPoolServerExecutable 0
/opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/bin/passenger-spawn-server /opt/ruby-enterprise-1.8.6-20090201/bin/ruby
/tmp/passenger.30840/info/status.fifo
30848 1 87.7 MB ? Passenger spawn server
30888 1 123.6 MB 0.0 MB Passenger ApplicationSpawner: /home/myrailsapp
30892 1 1777.4 MB 847.5 MB Rails: /home/myrailsapp
### Processes: 4
### Total private dirty RSS: 847.62 MB (?)
Very strange. In /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/Hooks.cpp of the passenger source,
the part expectionUploadData is the one that deals with the
> Expect: 100-continue
handshake. But it seems curl isn't handling this exchange; it keeps on streaming the file, ignoring the response.
To avoid this, we can fall back to HTTP/1.0 by passing -0 to curl, which stops it from sending the Expect header.
Now the correct mechanism happens:
/tmp/passenger.1291/backends/backend.g0mi40ARBFbEdb08pxB3uzyh3JJyfR1eaI9xPuQwyLEd3NjQ24rbpSBb9FrZfNX5WI5VYQ
====> Memory doesn't go up: good! (again)
====> Same number of files = 1 /tmp/passenger + similar to previous examples
The alternatives: (non-rails)
The problem so far is mainly one of implementation; there is no reason why streaming a file upload would not be possible in rails.
The correct hooks for streaming the file directly to a handler without temporary files or memory, are currently just not there.
I hope eventually we will see an Upload streaming API (similar to the download Stream API) and a streamable Multipart handler.
Alternative 1: have the webserver handle our stream directly
http://apache.webthing.com/mod_upload/: an apache module for doing uploads directly in the webserver
http://www.motionstandingstill.com/nginx-upload-awesomeness/2008-08-13/: a nginx module for doing uploads
Alternative 2: Write our own httpserver in ruby:
Using a raw HTTP server: plain sockets to implement the webserver, http://lxscmn.com/tblog/?p=25
Alternative 3: Use apache commons fileupload component in ruby
This component is exactly what we need in rails/ruby. http://commons.apache.org/fileupload/
For now, this is what we will use. It has a streamable API for both the incoming request AND the multipart parts!
Read more at http://www.jedi.be/blog/2009/04/10/ruby-servlets-and-large-large-file-uploads-enter-apache-fileupload/
http://www.jedi.be/blog/2009/04/10/rails-and-large-large-file-uploads-looking-at-the-alternatives/
Rails has plenty of upload plugins: PaperClip, UploadColumn, Acts As Attachment, Attachment Fu, File Column, FlexImage, ActiveUpload and so on, but they are all meant for uploading small files to a site, such as images and documents, generally in the few-MB range. Uploading a large file still runs into the same problem of blocking the whole rails application.
So the solution probably has to come from another direction.
1. Switching servers -> trying this is not the answer
The default Webrick: unusable. The browser hangs outright; on Windows you can still force-kill it, on Ubuntu the machine simply froze.
Mongrel: Mongrel can complete the upload, and on Windows other pages stay browsable, but it is slow, roughly 50000-70000ms (1.83GHz CPU, 2G RAM, uploading the ubuntu10.04LTS iso: 700M), and it produces a temporary file of the same size, so it is heavy on disk.
thin: with the thin server alone, much the same as mongrel, also 50000+ ms (same environment as mongrel), and the same full-size temporary file. On Ubuntu it behaves slightly better than mongrel: apart from a brief stall at the moment you click upload, browsing other pages or running other programs is basically unaffected.
nginx + thin: three thin instances, rails started with mongrel; this produces temporary files totalling twice the source file, with the same upload behaviour as above.
Nginx upload module:
This module's default upload size is 1M, and the official documentation does not clearly state the upload limit.
http://www.grid.net.ru/nginx/upload.en.html
Nginx+thin:
http://glauche.de/2008/01/12/thin-nginx-with-rails/
2. Other lightweight frameworks, such as merb. Merb was merged into rails3. The idea is to let merb handle the upload request on its own and hand control to rails once the file is saved. On rails 2.2.2 I installed merb and generated a merb app, but could not get it to run; the errors pointed straight into the source code. Most of what I found says it is a merb bug. I have not tried the rails3 version.
3. Going through Flash / AS3
The online drives we use every day, like the NetEase disk that comes with a NetEase mailbox, also upload files straight to the server, but their capacity is usually small, no more than 500M.
I also looked at some other online drive services at home and abroad:
ADrive: 50G, single files up to 2G, slow, flash upload UI.
纳米机器人: supports 4G uploads, but it is a desktop client, which is a different situation from going through a web browser.
Dropbox: very popular abroad; uploads are typically a few hundred MB, also done through a flash upload component.
The way online drives handle large uploads seems to fit our situation well. From the material I found, a fair share of them go through a client application or transfer directly over FTP.
Transfers over the network also have to deal with interruption. Large transfers take a long time, so this has to be handled: the server side needs to support resumable uploads.
Consider SFTP.
Below is one approach (note: Net::FTP.open already logs in when given credentials, and the success reply code for a completed transfer is 226):

require 'net/ftp'

Net::FTP.open('uploads.yoursite.com', 'username', 'password') do |ftp|
  ftp.put 'filename'
  if ftp.last_response != "226 Transfer complete.\n"
    puts "Error with FTP upload\nResponse was: #{ftp.last_response}"
  end
end
Rails and Large, Large file Uploads: looking at the alternatives
Uploading files in rails is a relatively easy task. There are a lot of helpers to manage this even more flexible, such as attachment_fu or paperclip. But what happens if your upload *VERY VERY LARGE* files (say 5GB) in rails, do the standard solutions apply? The main thing is that we want to avoid load file in memory strategies and avoid multiple temporary file writes.
This document describes our findings of uploading these kind of files in a rails environment. We tried the following alternatives:
Using Webrick
Using Mongrel
Using Merb
Using Mongrel Handlers
Using Sinatra
Using Rack Metal
Using Mod_Rails aka Passenger
Non-Rails Alternatives
(original image from http://www.masternewmedia.org/)
And i'm afraid, the new is not that good. For now....
A simple basic Upload Handle (to get started)
Ok , let's make a little upload application . (loosely based upon http://www.tutorialspoint.com/ruby-on-rails/rails-file-uploading.htm
Install rails (just to show you the version I used)
$ gem install rails Successfully installed rake-0.8.4 Successfully installed activesupport-2.3.2 Successfully installed activerecord-2.3.2 Successfully installed actionpack-2.3.2 Successfully installed actionmailer-2.3.2 Successfully installed activeresource-2.3.2 Successfully installed rails-2.3.2
$ gem install sqlite3-ruby $ rails upload-test $ cd upload-test $ script/generate controller Upload exists app/controllers/ exists app/helpers/ create app/views/upload exists test/functional/ create test/unit/helpers/ create app/controllers/upload_controller.rb create test/functional/upload_controller_test.rb create app/helpers/upload_helper.rb create test/unit/helpers/upload_helper_test.rbThe first step is to create controller that has two actions, on 'index' it will show a form "uploadfile.html.erb' and the action 'upload' will handle the upload
#app/controller/upload_controller.rb class UploadController < ApplicationController def index render :file => 'app/views/upload/uploadfile.html.erb' end def upload post = Datafile.save(params[:uploadform]) render :text => "File has been uploaded successfully" end end
The second create the view to have file upload form in the browser. Note the multipart parameter to do a POST
#app/views/upload/uploadfile.html.erb <% form_for :uploadform, :url => { :action => 'upload'}, :html => {:multipart => true} do |f| %> <%= f.file_field :datafile %><br /> <%= f.submit 'Create' %> <% end %>
Last is to create the model , to save the uploaded file to public/data. Note the orignal_filename we use to
#app/models/datafile.rb class Datafile < ActiveRecord::Base def self.save(upload) name = upload['datafile'].original_filename directory = "public/data" # create the file path path = File.join(directory, name) # write the file File.open(path, "wb") { |f| f.write(upload['datafile'].read) } end end
Before we startup we create the public/data dir
$ mkdir public/data $ ./script server webrick => Booting WEBrick => Rails 2.3.2 application starting on http://0.0.0.0:3000 => Call with -d to detach => Ctrl-C to shutdown server [2009-04-10 13:18:27] INFO WEBrick 1.3.1 [2009-04-10 13:18:27] INFO ruby 1.8.6 (2008-03-03) [universal-darwin9.0] [2009-04-10 13:18:27] INFO WEBrick::HTTPServer#start: pid=5057 port=3000
Point your browser to http://localhost:3000/upload and you can upload a file. If all goes well, there should be a file public/data with the same name as your file that your uploaded.
Scripting a large Upload
Browser have their limitations for file uploads. Depending on if your working on 64Bit OS, 64 Bit Browser , you can upload larger files. But 2GB seems to be the limit.
For scripting the upload we will use curl to do the same thing. To upload a file called large.zip to our form, you can use:
curl -Fuploadform['datafile']=@large.zip http://localhost:3000/upload/upload
If you would use this, rails would throw the following error: "ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):"
As described in http://ryandaigle.com/articles/2007/9/24/what-s-new-in-edge-rails-better-cross-site-request-forging-prevention is is used to protect rails against cross site request forging. We need to have rails skip this filter.
#app/controller/upload_controller.rb
class UploadController < ApplicationController
skip_before_filter :verify_authenticity_token
Webrick and Large File Uploads
Webrick is the default webserver that ships with rails. Now let's upload a large file and see what happens.
Ok, it's natural that this takes longer to handle. But if you zoom on the memory usage of your ruby process, f.i. with top
7895 ruby 16.0% 0:26.61 2 33 144 559M 188K 561M 594M
====> Memory GROWS: We see that the ruby process is growing and growing. I guess it is because webrick loads the body in a string first.
#gems/rails-2.3.2/lib/webrick_server.rb def handle_dispatch(req, res, origin = nil) #:nodoc: data = StringIO.new Dispatcher.dispatch( CGI.new("query", create_env_table(req, origin), StringIO.new(req.body || "")), ActionController::CgiRequest::DEFAULT_SESSION_OPTIONS, data )
=====> Files get written to disk Multiple times for the Multipart parsing: When the file is upload, you see message appearing in the webrick log. It has a file in /var/folder/EI/....
Processing UploadController#upload (for ::1 at 2009-04-09 13:51:23) [POST]
Parameters: {"commit"=>"Create", "authenticity_token"=>"rf4V5bmHpxG74q6ueI3hUjJzwhTLUJCp9VO1uMV1Rd4=", "uploadform"=>{"datafile"=>#<File:/var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/RackMultipart.7895.1>}}
[2009-04-09 14:09:03] INFO WEBrick::HTTPServer#start: pid=7974 port=3000
It turns out, that the part that handles the multipart, writes the files to disk in the $TMPDIR. It creates files like
$ ls $TMPDIR/
RackMultipart.7974.0
RackMultipart.7974.1
Strange, two times? We only uploaded one file? I figure this is handled by the rack/utils.rb bundled in action_controller. Possible related is this bug described at https://rails.lighthouseapp.com/projects/8994/tickets/1904-rack-middleware-parse-request-parameters-twice
#gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/utils.rb # Stolen from Mongrel, with some small modifications: def self.parse_multipart(env) write multi
Optimizing the last write to disk
Instead of
# write the file
File.open(path, "wb") { |f| f.write(upload['datafile'].read) }
We can use the following to avoid writing to disks our selves
FileUtils.mv upload['datafile'].path, path
This makes use from the fact that the file is allready on disk, and a file move is much faster then rewriting the file.
Still this might not be usable in all cases: If your TMPDIR is on another filesystem then your final destination, this trick won't help you.
Mongrel and Large File Uploads The behaviour of Webrick allready was discussed on the mongrel mailinglist http://osdir.com/ml/lang.ruby.mongrel.general/2007-10/msg00096.html And is supposed to be fixed. So let's install mongrell
$ gem install mongrel Successfully installed gem_plugin-0.2.3 Successfully installed daemons-1.0.10 Successfully installed fastthread-1.0.7 Successfully installed cgi_multipart_eof_fix-2.5.0 Successfully installed mongrel-1.1.5 $ mongrel_rails start
Ok, let's start the upload again using our curl:
======> Memory does not grow: that's good news.
======> 4 file writes! for 1 upload : because Mongrel does not keep the upload in memory, it writes it to a tempfile in the $TMPDIR. Depending on the size of the file, > MAX_BODY it will create a tempfile or just a string in memory
lib/mongrel/const.rb
# This is the maximum header that is allowed before a client is booted. The parser detects # this, but we'd also like to do this as well. MAX_HEADER=1024 * (80 + 32) # Maximum request body size before it is moved out of memory and into a tempfile for reading. MAX_BODY=MAX_HEADER lib/mongrel/http_request.rb # must read more data to complete body if remain > Const::MAX_BODY # huge body, put it in a tempfile @body = Tempfile.new(Const::MONGREL_TMP_BASE) @body.binmode else # small body, just use that @body = StringIO.new end
In our tests, we saw that aside from the RackMultipart.<pid>.x files, there is additional file written in $TMPDIR: mongrel.<pi>.0
That means that for 5 GB, we now have 4x 5GB : 1 mongrel + 2 RackMultipart + 1 final file (depending on the move or not)= 20 GB
======> Not reliable , predictable results?
Also, we saw the upload sometimes: mongrel did not create the RackMultiparts but CGI.<pid>.0 . Unsure what the reasons is. Merb and Large File Uploads
One of the solutions you see for handling file uploads is using Merb, the main reason that there is less blocking of your handlers.
http://www.idle-hacking.com/2007/09/scalable-file-uploads-with-merb/
http://devblog.rorcraft.com/2008/8/25/uploading-large-files-to-rails-with-merb
http://blog.vixiom.com/2007/06/29/merb-on-air-drag-and-drop-multiple-file-upload/
Let's try this:
$ gem install merb Successfully installed dm-aggregates-0.9.11 Successfully installed dm-validations-0.9.11 Successfully installed randexp-0.1.4 Successfully installed dm-sweatshop-0.9.11 Successfully installed dm-serializer-0.9.11 Successfully installed merb-1.0.11 Let's create the merb application: $ merb-gen app uploader-app $ cd uploader-app
We need to create the controller, but this a bit different from our original controller:
the file is called upload.rb instead of upload_controller.rb
removed the skip_before
in Merb it is Application and not ApplicationController
#app/controllers/upload.rb class Upload < Application def index render :file => 'app/views/upload/uploadfile.rhtml' end def upload post = Datafile.save(params[:uploadform]) render :text => "File has been uploaded successfully" end end
The model looks like this:
Remove the ActiveRecord
include DataMapper::Resource
original_filename does not exist: merb passes it in the variable filename
tempfile is also changed on how merb passes the temporary file
#app/models/datafile.rb class Datafile include DataMapper::Resource def self.save(upload) name = upload['datafile']['filename'] directory = "public/data" # create the file path path = File.join(directory, name) # write the file File.open(path, "wb") { |f| f.write(upload['datafile']['tempfile'].read) } end
We create the public/data
$ mkdir public/data
And start merb .
$ merb
~ Connecting to database...
~ Loaded slice 'MerbAuthSlicePassword' ...
~ Parent pid: 57318
~ Compiling routes...
~ Activating slice 'MerbAuthSlicePassword' ...
merb : worker (port 4000) ~ Starting Mongrel at port 4000
When you start the upload, a merb worker becomes active.
=====> No memory increases : good!
merb : worker (port 4000) ~ Successfully bound to port 4000
=====> 3 Filewrites: 1 mongrel + 1 merb + 1 final write
Mongrel first start writing its mongrel.<pid>.0 in our $TMPDIR/
merb : worker (port 4000) ~ Params: {"format"=>nil, "action"=>"upload", "id"=>nil, "controller"=>"upload", "uploadform"=>{"datafile"=>{"content_type"=>"application/octet-stream",
"size"=>306609434, "tempfile"=>#<File:/var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/Merb.13243.0>, "filename"=>"large.zip"}}}
merb : worker (port 4000) ~
After that Merb handles the multipart stream and writes once in $TMPDIR/Merb.<pid>.0
Sinatra and Large Files:
Sinatra is a simple framework for describing the controllers yourself. Because it seemed to have direct access to the stream, I hoped that i would be able to stream it directly without the MultiPart of Rack.
http://technotales.wordpress.com/2008/03/05/sinatra-the-simplest-thing-that-could-possibly-work/
http://m.onkey.org/2008/11/10/rails-meets-sinatra
http://www.slideshare.net/jiang.wu/ruby-off-rails
http://sinatra-book.gittr.com/
First step install sinatra:
$ gem install sinatra
Successfully installed sinatra-0.9.1.1
1 gem installed
Installing ri documentation for sinatra-0.9.1.1...
Installing RDoc documentation for sinatra-0.9.1.1...
Create a sample upload handler:
#sinatra-test-upload.rb require 'rubygems' require 'sinatra' post '/upload' do File.open("/tmp/theuploadedfile","wb") { |f| f.write(params[:datafile]['file'].read) } end
$ ruby upload-sinatra.rb
== Sinatra/0.9.1.1 has taken the stage on 4567 for development with backup from Mongrel
So instead of 3000 it listens on 4567
====> No memory increase: good!
====> 4 file writes: Again we see 4= 1 Mongrel.<pid>.* + 2 x Multipart.<pid>.* + 1 file write
Using Mongrel handlers to bypass other handlers
Up until now, we have the webserver, the multipart parser and the final write. So how can we skip the webserver or the multipart writing to disk and not consuming all the memory.
I found another approach by using a standalone mongrel handler:
http://rubyenrails.nl/articles/2007/12/24/rails-mvc-aan-je-laars-lappen-met-mongrel-handlers
http://www.ruby-forum.com/topic/128070
This allows you to interact with the incoming stream before Rack/Multipart kicks in.
Let's create an example Mongrel Handler. It's just the part that shows you that you can access the request directly:
require 'rubygems' require 'mongrel' class HelloWorldHandler < Mongrel::HttpHandler def process(request, response) puts request.body.path response.start(200) do |head,out| head['Content-Type'] = "text/plain" out << "Hello world!" end end def request_progress (params, clen, total) end end Mongrel::Configurator.new do listener :port => 3000 do uri "/", :handler => HelloWorldHandler.new end run; join end
=====>No memory increase: good!
=====>1 FILE and direct access, but still needs multipart parsing:
It turns out that request.body.path is the mongrel.<pid>.0 file , giving us directly access to the first uploaded file.
request.body.path = /var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/mongrel.93690.0
Using Rails Metal Metal is an addition to Rails 2.3 that allows you to bypass the rack.
http://soylentfoo.jnewland.com/articles/2008/12/16/rails-metal-a-micro-framework-with-the-power-of-rails-m
http://railscasts.com/episodes/150-rails-metal
http://www.pathf.com/blogs/2009/03/uploading-files-to-rails-metal/
http://www.ruby-forum.com/topic/171070
# Allow the metal piece to run in isolation require(File.dirname(__FILE__) + "/../../config/environment") unless defined?(Rails) class Uploader def self.call(env) if env["PATH_INFO"] =~ /^\/uploader/ puts env["rack.input"].path [200, {"Content-Type" => "text/html"}, ["It worked"]] else [400, {"Content-Type" => "text/html"}, ["Error"]] end end end
Similar to the Mongrel HTTP Handler, we can have access to the mongrel file upload by
env["rack.input"].path = actually the /var/folders/EI/EIPLmNwOEea96YJDLHTrhU+++TI/-Tmp-/mongrel.81685.0
If we want to parse this, we can pass the env to the Request.new but this kicks in the RackMultipart again.
request = Rack::Request.new(env)
puts request.POST
#uploaded_file = request.POST["file"][:tempfile].read
=====>No memory increase: good!
=====>1 FILE and direct access, but still needs multipart parsing
=====>Can still run traditional rails and metal rails in the same webserver
Using Mod_rails aka Passenger
Mod_rails seems to be becoming the new standard for running Rails applications: no blocking hassle, just plain Apache as a good, stable, proven technology.
One of its main benefits is that the handler is not blocked from sending a response back until the complete request is handled. Sounds like good technology here!
http://www.pathf.com/blogs/2009/03/uploading-files-to-rails-metal/
curl -v -F datafile['file']=@large.zip http://localhost:80/
* About to connect() to localhost port 80
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80
> POST /datafiles HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: localhost
> Accept: */*
> Content-Length: 421331151
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------1bf75aea2f35
>
< HTTP/1.1 100 Continue
Setting up mod_rails is beyond the scope of this document, so we assume you already have it working for your Rails app.
in my /etc/httpd/conf/httpd.conf
LoadModule passenger_module /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/mod_passenger.so
PassengerRoot /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3
PassengerRuby /opt/ruby-enterprise-1.8.6-20090201/bin/ruby
Mod_rails has a nice setting that lets you specify the temp dir Passenger uses:
See http://www.modrails.com/documentation/Users%20guide.html#_passengertempdir_lt_directory_gt for more details
5.10. PassengerTempDir <directory>
Specifies the directory that Phusion Passenger should use for storing temporary files. This includes things such as Unix socket files, buffered file uploads, etc.
This option may be specified once, in the global server configuration. The default temp directory that Phusion Passenger uses is /tmp.
This option is especially useful if Apache is not allowed to write to /tmp (which is the case on some systems with strict SELinux policies) or if the partition that /tmp lives on doesn’t have enough disk space.
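For example (the path is purely illustrative, and per the docs above the option belongs in the global server configuration, not inside a virtual host):

```apache
# Spool Passenger's buffered uploads, socket files etc. on a partition
# with enough free space instead of the default /tmp
PassengerTempDir /var/spool/passenger-tmp
```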
Ok let's start the upload and see what happens:
=====> Memory goes up!
# ./passenger-memory-stats
-------------- Apache processes ---------------
PID PPID Threads VMSize Private Name
-----------------------------------------------
30840 1 1 184.3 MB 0.0 MB /usr/sbin/httpd
30852 30840 1 186.2 MB ? /usr/sbin/httpd
30853 30840 1 184.3 MB ? /usr/sbin/httpd
30854 30840 1 184.3 MB ? /usr/sbin/httpd
30855 30840 1 184.3 MB ? /usr/sbin/httpd
30856 30840 1 184.3 MB ? /usr/sbin/httpd
30857 30840 1 184.3 MB ? /usr/sbin/httpd
30858 30840 1 184.3 MB ? /usr/sbin/httpd
30859 30840 1 184.3 MB ? /usr/sbin/httpd
### Processes: 9
### Total private dirty RSS: 0.03 MB (?)
---------- Passenger processes -----------
PID Threads VMSize Private Name
------------------------------------------
30847 4 14.1 MB 0.1 MB /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/ApplicationPoolServerExecutable 0
/opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/bin/passenger-spawn-server /opt/ruby-enterprise-1.8.6-20090201/bin/ruby
/tmp/passenger.30840/info/status.fifo
30848 1 87.7 MB ? Passenger spawn server
30888 1 123.6 MB 0.0 MB Passenger ApplicationSpawner: /home/myrailsapp
30892 1 1777.4 MB 847.5 MB Rails: /home/myrailsapp
### Processes: 4
### Total private dirty RSS: 847.62 MB (?)
Very strange. In /opt/ruby-enterprise-1.8.6-20090201/lib/ruby/gems/1.8/gems/passenger-2.1.3/ext/apache2/Hooks.cpp of the Passenger source:
expectingUploadData = ap_should_client_block(r);
if (expectingUploadData && atol(lookupHeader(r, "Content-Length")) > UPLOAD_ACCELERATION_THRESHOLD) {
    uploadData = receiveRequestBody(r);
}
the expectingUploadData part is what triggers the
> Expect: 100-continue
But it seems curl isn't handling this request: it keeps on streaming the file, ignoring the response.
To avoid having mod_rails send this, we can fall back to HTTP/1.0 by passing -0 to curl.
$ curl -v -0 -F datafile['file']=@large.zip http://localhost:80
* About to connect() to localhost port 80
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80
> POST /uploader/ HTTP/1.0
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: localhost
> Accept: */*
> Content-Length: 421331151
> Content-Type: multipart/form-data; boundary=----------------------------1b04b7cb6566
Now the correct mechanism happens.
/tmp/passenger.1291/backends/backend.g0mi40ARBFbEdb08pxB3uzyh3JJyfR1eaI9xPuQwyLEd3NjQ24rbpSBb9FrZfNX5WI5VYQ
====> Memory doesn't go up: good! (again)
====> Same number of files: 1 file under /tmp/passenger, similar to the previous examples
The alternatives (non-Rails)
The problem so far is mainly one of implementation; there is no reason why streaming a file upload would not be possible in Rails.
The correct hooks for streaming the file directly to a handler, without temporary files or memory buffering, are currently just not there.
I hope eventually we will see an upload streaming API (similar to the download streaming API) and a streamable multipart handler.
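To make that wish concrete, a hypothetical sketch of such a hook (UploadListener is our invention, not an existing Rails or Rack API): the server would call on_chunk once per network read, so the body goes straight to disk as it arrives.

```ruby
# Hypothetical streaming-upload hook: chunks are written out as they
# arrive instead of being spooled into a temp file first.
class UploadListener
  def initialize(dest_path)
    @file = File.open(dest_path, 'wb')
    @bytes = 0
  end

  # would be invoked by the server once per network read
  def on_chunk(data)
    @bytes += data.bytesize
    @file.write(data)
  end

  # invoked when the request body is complete; returns total bytes
  def on_finish
    @file.close
    @bytes
  end
end
```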
Alternative 1: have the webserver handle our stream directly
http://apache.webthing.com/mod_upload/: an Apache module for handling uploads directly in the webserver
http://www.motionstandingstill.com/nginx-upload-awesomeness/2008-08-13/: an nginx module for handling uploads
Alternative 2: Write our own httpserver in ruby:
Using a raw HTTP server: implement the webserver yourself with plain sockets, http://lxscmn.com/tblog/?p=25
Alternative 3: use the Apache Commons FileUpload component from Ruby
This component is exactly what we need in Rails/Ruby: http://commons.apache.org/fileupload/
For now, this is what we will use. It has a streaming API for both the incoming request body AND the individual multipart parts!
Read more at http://www.jedi.be/blog/2009/04/10/ruby-servlets-and-large-large-file-uploads-enter-apache-fileupload/
http://www.jedi.be/blog/2009/04/10/rails-and-large-large-file-uploads-looking-at-the-alternatives/
Rails has plenty of upload plugins: Paperclip, UploadColumn, Acts As Attachment, Attachment Fu, File Column, FlexImage, ActiveUpload and so on. But they are all built for uploading images, documents and other small files to a site, typically a few MB; with large files you hit the same problem of the whole Rails application blocking.
So the solution probably has to come from somewhere else.
1. Switching servers → trying this turned out not to be the solution
The default WEBrick: unusable. The browser simply hangs; on Windows you can at least force-kill it, on Ubuntu the machine locked up completely.
Mongrel: uploads do work, and on Windows other pages stay browsable, but it is slow, roughly 50,000-70,000 ms (1.83 GHz CPU, 2 GB RAM, uploading the Ubuntu 10.04 LTS ISO, about 700 MB), and it writes a temporary file of the same size, so it is heavy on disk.
Thin: on its own, about the same as Mongrel, also 50,000+ ms (same environment), with the same full-size temporary file. On Ubuntu it is slightly better than Mongrel: apart from a stall at the moment you click upload, browsing other pages or running other programs is barely affected.
nginx + thin: three Thin instances behind nginx (Rails itself started with Mongrel); this produced temporary files of twice the source size, with otherwise the same upload behavior.
Nginx upload module:
The module's default upload size is 1 MB, and the official documentation doesn't clearly state the maximum upload size.
http://www.grid.net.ru/nginx/upload.en.html
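For reference, a minimal sketch of what an nginx upload module location block looks like (the paths and the 4g limit are illustrative; note that nginx's own client_max_body_size defaults to 1m and must be raised as well):

```nginx
location /upload {
    # write the file to disk in nginx itself, then hand only the
    # form fields (including the stored path) to the Rails backend
    upload_pass          /uploader;
    upload_store         /var/tmp/nginx_uploads;
    client_max_body_size 4g;   # nginx's request body limit, default 1m
}
```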
Nginx + thin:
http://glauche.de/2008/01/12/thin-nginx-with-rails/
2. Other lightweight frameworks, e.g. Merb. Merb was merged into Rails 3. The idea: let Merb handle the upload request on its own and hand control back to Rails once the file is saved. On Rails 2.2.2 I installed Merb and created a Merb app, but could not get it to run; the errors pointed straight into the framework source, and most reports call it a Merb bug. I have not tried the Rails 3 version.
3. Handling uploads via Flash / ActionScript 3
The online drives we use every day, like the NetEase mailbox's disk, also upload files straight to the server, but their capacity is usually small, rarely over 500 MB.
I also surveyed some other online storage services at home and abroad:
ADrive: 50 GB, single files up to 2 GB, slow, Flash upload UI.
纳米机器人 (Nanorobot): supports 4 GB uploads, but it is a desktop client, which is a different thing from going through the web browser.
Dropbox: very popular abroad; uploads are typically a few hundred MB, also via a Flash upload component.
The online-drive approach to large uploads seems to fit our situation well. From what I have read, a fair share of these services rely on client software or plain FTP.
Transfers over the network also have to survive interruptions. Large files take a long time, so the server needs to support resumable uploads.
Worth considering: SFTP.
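Resumable uploads boil down to simple bookkeeping on the server: after an interruption the client asks how many bytes arrived and resumes from there. A minimal sketch (the helper names are ours; real protocols such as FTP REST or HTTP Range work on the same principle):

```ruby
# current_offset: how many bytes of this upload we already have on disk
def current_offset(path)
  File.exist?(path) ? File.size(path) : 0
end

# append_chunk: accept the next chunk only if the client's claimed offset
# matches what is on disk, so a retried chunk cannot corrupt the file
def append_chunk(path, offset, chunk)
  raise "offset mismatch" unless current_offset(path) == offset
  File.open(path, 'ab') { |f| f.write(chunk) }
  current_offset(path)
end
```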
Comments
#1 orcl_zhang, 2010-07-30:
Good stuff, I'll give it a try when I get home.
For nginx you can look here:
http://iceskysl.1sters.com/?p=431
http://iceskysl.1sters.com/?p=368