OpenCV and Touchlib

I've recently become quite interested in the popular multi-touch technology and did a bit of searching on the web. Compared with touchscreens, multi-touch implemented with computer vision seems to have much broader application prospects. Combining the basic functionality in OpenCV with Touchlib, you can do a lot of interesting things.

Below is a brief introduction to Touchlib. To save myself some work, I've pasted it here directly from the original site.


Website: http://www.whitenoiseaudio.com/touchlib/
========================================================

What is Touchlib?

Touchlib is our library for creating multi-touch interaction surfaces. It handles tracking blobs of infrared light for you and sends your programs multi-touch events such as 'finger down', 'finger moved', and 'finger released'. It includes a configuration app and a few demos to get you started. It interfaces with most major types of webcams and video capture devices. It currently works only under Windows, but efforts are being made to port it to other platforms.

Who Should Use Touchlib?

Touchlib only comes with simple demo applications. If you want to use touchlib, you must be prepared to write your own apps. There are a few ways to do this. You can build applications in C++ and take advantage of touchlib's simple programming interface. Touchlib does not provide you with any graphical or front-end abilities - it simply passes you touch events; the graphics are up to you. If you like, take a look at the example apps, which use OpenGL and GLUT.
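To make the shape of that interface concrete, here is a minimal sketch of the kind of listener class such an app would implement. This is illustration only: the struct and method names below (TouchData, fingerDown, fingerUpdate, fingerUp) are modeled on the event names described above and are assumptions, so check the touchlib headers and example apps for the real signatures.

    #include <cstdio>

    // Hypothetical event payload - the real touchlib structure may differ.
    struct TouchData
    {
        int   ID;      // blob/finger identifier assigned by the tracker
        float X, Y;    // calibrated position on the screen
        float area;    // blob size, a rough stand-in for pressure
    };

    // Sketch of a listener receiving the 'finger down/moved/released' events
    // described above. Touchlib exposes a listener interface along these
    // lines; the names here are assumptions, not the library's actual API.
    class MyTouchListener
    {
    public:
        void fingerDown(const TouchData &d)
        {
            std::printf("finger %d down at (%.3f, %.3f)\n", d.ID, d.X, d.Y);
        }
        void fingerUpdate(const TouchData &d)
        {
            std::printf("finger %d moved to (%.3f, %.3f)\n", d.ID, d.X, d.Y);
        }
        void fingerUp(const TouchData &d)
        {
            std::printf("finger %d released\n", d.ID);
        }
    };

Whatever graphics layer you choose (the examples use OpenGL and GLUT) would simply read the positions stored by such a listener and draw from them.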

If you don't want to have to compile touchlib, binaries are available.

As of the current version, touchlib can broadcast events using the TUIO protocol (which uses OSC). This makes touchlib compatible with several other applications that support this protocol, such as vvvv, Processing, PureData, etc. It also makes it possible to use touchlib for blob detection / tracking and something like vvvv or Processing to write applications. Of course, the other option is to do all your blob detection and processing in vvvv or Processing - it's up to you. Supporting the TUIO protocol also enables a distributed architecture, where one machine can be devoted to detection and tracking and another machine can handle the application.
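Since oscpack is one of touchlib's dependencies (see the environment variables below), the receiving side of such a setup can be sketched with it. The snippet below listens on UDP port 3333 (the conventional TUIO port) and prints '/tuio/2Dcur set' cursor updates. The argument layout (string command, int32 session id, float x, float y, ...) follows the TUIO 1.0 cursor profile, but treat it as an assumption and consult the TUIO specification for the full message set (alive, fseq, and so on).

    #include <cstdio>
    #include <cstring>

    #include "osc/OscReceivedElements.h"
    #include "osc/OscPacketListener.h"
    #include "ip/UdpSocket.h"

    // Minimal TUIO cursor listener built on oscpack.
    class TuioListener : public osc::OscPacketListener
    {
    protected:
        virtual void ProcessMessage(const osc::ReceivedMessage &m,
                                    const IpEndpointName & /*remote*/)
        {
            try
            {
                if (std::strcmp(m.AddressPattern(), "/tuio/2Dcur") != 0)
                    return;

                osc::ReceivedMessage::const_iterator arg = m.ArgumentsBegin();
                const char *cmd = (arg++)->AsString();
                if (std::strcmp(cmd, "set") == 0)
                {
                    osc::int32 id = (arg++)->AsInt32(); // session id of the cursor
                    float x = (arg++)->AsFloat();       // normalized x (0..1)
                    float y = (arg++)->AsFloat();       // normalized y (0..1)
                    std::printf("cursor %d at (%.3f, %.3f)\n", (int)id, x, y);
                }
            }
            catch (osc::Exception &e)
            {
                std::printf("error parsing TUIO message: %s\n", e.what());
            }
        }
    };

    int main()
    {
        TuioListener listener;
        UdpListeningReceiveSocket socket(
            IpEndpointName(IpEndpointName::ANY_ADDRESS, 3333), &listener);
        socket.Run();   // blocks, dispatching incoming OSC packets to the listener
        return 0;
    }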

If you don't like touchlib and want to program your own system, the latest version of OpenCV (1.0) now has support for blob detection and tracking. This might be a good starting point.
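As a rough illustration of that starting point, the sketch below uses the OpenCV 1.x C API to grab frames from a camera, threshold them so that only bright IR blobs remain, and report a bounding box per blob via cvFindContours. A real FTIR pipeline (touchlib's included) adds background subtraction, filtering, and frame-to-frame tracking on top of this; the camera index and threshold value here are arbitrary assumptions.

    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    int main()
    {
        CvCapture *capture = cvCaptureFromCAM(0);      // first available camera
        if (!capture)
            return 1;

        CvMemStorage *storage = cvCreateMemStorage(0); // contour scratch memory
        cvNamedWindow("blobs", CV_WINDOW_AUTOSIZE);

        while (1)
        {
            IplImage *frame = cvQueryFrame(capture);   // owned by capture, do not release
            if (!frame)
                break;

            IplImage *gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
            cvCvtColor(frame, gray, CV_BGR2GRAY);

            // Keep only bright spots (fingertips lit by IR); 200 is an arbitrary threshold.
            cvThreshold(gray, gray, 200, 255, CV_THRESH_BINARY);
            cvShowImage("blobs", gray);

            CvSeq *contour = 0;
            cvFindContours(gray, storage, &contour, sizeof(CvContour),
                           CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

            for (; contour != 0; contour = contour->h_next)
            {
                CvRect r = cvBoundingRect(contour, 0);
                printf("blob at (%d, %d), size %dx%d\n", r.x, r.y, r.width, r.height);
            }

            cvClearMemStorage(storage);
            cvReleaseImage(&gray);

            if (cvWaitKey(10) >= 0)                    // any key quits
                break;
        }

        cvDestroyWindow("blobs");
        cvReleaseMemStorage(&storage);
        cvReleaseCapture(&capture);
        return 0;
    }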

My Mindmap

My mindmap for the touchscreen is available here. It contains info on what parts you'll need for the construction of the screen, where to find them, and some very basic instructions for how to build a screen. It also includes some more links. I hope it's useful for those readers who are interested in building their own screens. You'll need Freemind (which is, coincidentally, free) in order to view it. I'm a big fan of Freemind for planning out projects and getting ideas down. Its hierarchical nature lets you organize material and hide the parts you are not interested in, and it can also link to images, other mindmaps, and web pages.

FAQ

Frequently asked questions about the construction of the screen can be found here.

Where to get the source to Touchlib, our multitouch table library:

All our source code is available on our Google Code site at http://code.google.com/p/touchlib/ . You can access the repository using Subversion. If you are using Windows, get TortoiseSVN and use it to access the repository and download all the files (much easier than going through the web interface). If you are interested in porting touchlib to Linux or the Mac, please email me. The system was written in such a way that it should be easy to port and does not depend heavily on any Windows-specific APIs.

Binaries are available here.

Touchlib is written in C++ (the BlobTracking / Analysis is all written by yours truly) and has a Visual Studio 2005 solution ready to compile. No docs are available right now and it's Windows-only (though it should be possible to make everything compile under other OSes with a little work). It currently depends on OpenCV, DirectShow (you'll need the Microsoft Platform SDK), VideoWrapper and DSVideoLib. The source code includes our main library, which you can link into your application to start capturing touch events; it has support for most major camera/webcam types. It also includes a basic config app, which needs to be run in order to calibrate your camera, and a couple of example apps. Alternatively, I've heard other people have used things like vvvv, EyesWeb, Processing and Max/MSP to do blob tracking / processing and build applications. You can check out some of the demo apps if you want to see how it works - Pong or the config app should be fairly easy to follow. Setting up a bare-minimum multitouch app should only take a dozen lines of code or less.
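For a feel of what that dozen lines might look like, the sketch below asks the library for the touch screen object, registers a listener, and pumps events from the main loop. The names used here (TouchScreenDevice, ITouchScreen, registerListener, loadConfig, beginProcessing, beginTracking, getEvents) are reconstructed from memory of the touchlib demo apps and should be treated as assumptions - the config app and Pong sources in the repository are the authoritative reference.

    // Assumed touchlib usage, patterned after its demo apps - verify the exact
    // header names and method signatures against the source tree before
    // relying on this.
    #include "TouchScreenDevice.h"   // assumed header name

    int main()
    {
        touchlib::ITouchScreen *screen = touchlib::TouchScreenDevice::getTouchScreen();

        // A listener like the one sketched earlier; in the real API it would
        // derive from touchlib's listener interface (ITouchListener).
        MyTouchListener listener;
        screen->registerListener(&listener);

        screen->loadConfig("config.xml");    // filter chain + calibration from the config app
        screen->beginProcessing();
        screen->beginTracking();

        for (;;)
            screen->getEvents();             // dispatches finger events to the listener

        return 0;
    }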

DL Links for dependencies:

You'll need to configure a few environment variables to get everything compiled. They are:

  • DSVL_HOME - root directory of DSVideoLib
  • VIDEOWRAPPER_HOME - root directory of the VideoWrapper library
  • OPENCV_HOME - root directory of OpenCV
  • OSCPACK_HOME - root directory of oscpack

The config app

In order to calibrate the touchlib library for your camera and projector, you'll need to run the config app. Here's how it works:

  • Set up your computer so that the main monitor is the video projector, so that the app comes up on that screen, then run the config app.
  • Press 'b' at any time to recapture the background.
  • Tweak the sliders until you get the desired results. The last step (rectify) should show only light coming from your fingertips (no background noise, etc.).
  • When you are satisfied, press 'enter'. This launches the app in full-screen mode and you'll see a grid of points (green pluses).
  • Press 'c' to start calibrating. The current point turns red; press on your FTIR screen where the point is. Hopefully a press is detected (you can check by looking in the debug window). Note that the screen may not indicate where you are pressing.
  • Press 'space' to calibrate the next point, and continue through until all points are calibrated.
  • When you are all done, press 'ESC' to quit.

All your changes (slider tweaks and calibration points) are saved to the config.xml file. Now when you run any touchlib app it will be calibrated. Note that any change to where the projector is pointing, or to your webcam, will require a re-calibration.

Testing

Alternate config files are available if you want to test the library using an .AVI for input (instead of the webcam). Replace the config.xml with 5point_avi.xml or 2point_avi.xml. You can edit those files to use a different AVI if you like (you can record a new one using your camera - but you may need to tweak some of the other settings in the config too).
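If you do want to record a fresh test clip from your own camera, a small OpenCV 1.x program along these lines will write one; the filename, codec, frame rate and duration below are arbitrary choices, and touchlib itself only needs the resulting path in the AVI config file.

    #include <cv.h>
    #include <highgui.h>

    int main()
    {
        CvCapture *capture = cvCaptureFromCAM(0);          // camera to record from
        if (!capture)
            return 1;

        IplImage *frame = cvQueryFrame(capture);           // grab one frame to learn the size
        if (!frame)
            return 1;

        CvVideoWriter *writer = cvCreateVideoWriter(
            "test.avi", CV_FOURCC('M', 'J', 'P', 'G'),     // Motion-JPEG keeps it simple
            30, cvGetSize(frame), 1);

        // Record roughly ten seconds at 30 fps.
        for (int i = 0; i < 300 && frame; ++i)
        {
            cvWriteFrame(writer, frame);
            frame = cvQueryFrame(capture);
        }

        cvReleaseVideoWriter(&writer);
        cvReleaseCapture(&capture);
        return 0;
    }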
