http://blog.manbolo.com/2012/04/08/ios-automated-tests-with-uiautomation

iOS Automated Tests with UIAutomation

April 8, 2012

Quick introduction

Automated tests are very useful to test your app “while you sleep”. They let you quickly track regressions and performance issues, and develop new features without worrying about breaking your app.

Since iOS 4.0, Apple has provided a framework called UIAutomation, which can be used to perform automated tests on real devices and in the iPhone Simulator. The documentation on UIAutomation is quite sparse and there are not many resources on the web. This tutorial will show you how to integrate UIAutomation into your workflow.

The best pointers to begin with are the Apple documentation on UIAutomation, a very good quick tutorial in the Apple Instruments documentation and, of course, the slides/videos of WWDC 2010 - Session 306 - Automating User Interface Testing with Instruments. You'll need a free developer account to access these resources.

Another framework worth mentioning is OCUnit, which is included in Xcode and can be used to add unit tests to your app.

  1. Your first UIAutomation script
  2. Dealing with UIAElement and Accessibility
  3. Tips to simplify your life
  4. Advanced interactions
  5. The end

1. Your first UIAutomation script

UIAutomation functional tests are written in JavaScript. There is a strong relationship between UIAutomation and accessibility, so you will use accessibility labels and values to simulate UI interactions and check their results.

Let’s go, and write our first test!

Using iOS simulator

  1. Download the companion project TestAutomation.xcodeproj, and open it. The project is a simple tab bar application with 2 tabs.
  2. Ensure that the scheme 'TestAutomation > iPhone 5.0 Simulator' is selected (maybe you've already switched to 5.1, so it could also be iPhone 5.1)
  3. Launch Instruments (Product > Profile, or ⌘I).
  4. In the template chooser, select the Automation template, then 'Profile'
  5. Instruments launches and starts recording immediately. Stop the recording (red button, or ⌘R).
  6. In the Scripts window, click 'Add > Create' to create a new script
  7. In the script editor, type the following code

    var target = UIATarget.localTarget();
    var app = target.frontMostApp();
    var window = app.mainWindow();
    target.logElementTree();
    

  8. Re-launch the script with ⌘R (you don't need to save). The script runs, and you can stop it once the logs appear.

Voilà! You’ve written your first UIAutomation test!

Using an iOS device

You can also run this test on a real device instead of the simulator. Automated tests are only available on devices that support multitasking: iPhone 3GS and later, and iPad, running iOS 4.0 or later. UIAutomation is unfortunately not available on the iPhone 3G, whatever the OS version.

To run the test on a device:

  1. Connect your iPhone over USB
  2. Select the scheme 'TestAutomation > iOS Device'
  3. Check that the Release configuration is associated with a Developer profile (and not an Ad-Hoc Distribution profile). By default, profiling is done in Release (there is no reason to profile an app in Debug!)
  4. Profile the app (⌘I)
  5. Follow the same steps as previously in the Simulator.

2. Dealing with UIAElement and Accessibility

UIAElement hierarchy

There is a strong relationship between Accessibility and UIAutomation: if a control is accessible through Accessibility, you will be able to set/get its value, trigger actions on it, etc. A control that is not “visible” to Accessibility won't be accessible through automation.

You can enable accessibility/automation on a control either in Interface Builder, or by setting the isAccessibilityElement property programmatically. You have to pay some attention when setting accessibility on a container view (i.e. a view that contains other UIKit elements). Enabling accessibility on an entire view can “hide” its subviews from accessibility/automation. For instance, in the project, the view outlet of the controller shouldn't be accessible, otherwise the sub-controls wouldn't be accessible. If you have any problem, logElementTree is your friend: it dumps all the currently visible elements that can be accessed.

Each UIKit control that can be accessed is represented by a JavaScript object, UIAElement. UIAElement has several properties: name, value, elements, parent. Your main window contains a lot of controls, which define a UIKit hierarchy. This UIKit hierarchy has a corresponding UIAElement hierarchy. For instance, calling logElementTree in the previous test gives the following tree:

+- UIATarget: name:iPhone Simulator rect:{{0,0},{320,480}}
|  +- UIAApplication: name:TestAutomation rect:{{0,20},{320,460}}
|  |  +- UIAWindow: rect:{{0,0},{320,480}}
|  |  |  +- UIAStaticText: name:First View value:First View rect:{{54,52},{212,43}}
|  |  |  +- UIATextField: name:User Text value:Tap Some Text Here ! rect:{{20,179},{280,31}}
|  |  |  +- UIAStaticText: name:The text is: value:The text is: rect:{{20,231},{112,21}}
|  |  |  +- UIAStaticText: value: rect:{{145,231},{155,21}}
|  |  |  +- UIATabBar: rect:{{0,431},{320,49}}
|  |  |  |  +- UIAImage: rect:{{0,431},{320,49}}
|  |  |  |  +- UIAButton: name:First value:1 rect:{{2,432},{156,48}}
|  |  |  |  +- UIAButton: name:Second rect:{{162,432},{156,48}}

To access the text field, you can just write:

var textField = UIATarget.localTarget().frontMostApp().mainWindow().textFields()[0];

You can choose to access elements by a 0-based index or by element name. For instance, the previous text field could also be accessed like this:

var textField = UIATarget.localTarget().frontMostApp().mainWindow().textFields()["User Text"];

The latter version is clearer and should be preferred. You can set the name of a UIAElement either in Interface Builder:

or programmatically:

myTextField.accessibilityEnabled = YES;
myTextField.accessibilityLabel = @"User Text";

You can see now that accessibility properties are used by UIAutomation to target the different controls. That's very clever, because 1) there is only one framework to learn; 2) by writing your automated tests, you're also going to ensure that your app is accessible! Each UIAElement can access its children by calling the following functions: buttons(), images(), scrollViews(), textFields(), webViews(), segmentedControls(), sliders(), staticTexts(), switches(), tabBar(), tableViews(), textViews(), toolbar(), toolbars(), etc. To access the first tab in the tab bar, you can write:

var tabBar = UIATarget.localTarget().frontMostApp().tabBar();
var tabButton = tabBar.buttons()["First"];  

The UIAElement hierarchy is really important and you’re going to deal with it constantly. And remember, you can dump the hierarchy each time in your script by calling logElementTree on UIAApplication:

UIATarget.localTarget().frontMostApp().logElementTree();

In the simulator, you can also activate the Accessibility Inspector. Launch the simulator, go to ’Settings > General > Accessibility > Accessibility Inspector’ and set it to ’On’.

This little rainbow box is the Accessibility Inspector. When collapsed, Accessibility is off; when expanded, Accessibility is on. To activate/deactivate Accessibility, you just have to click the arrow button. Now, go to our test app, launch it, and activate the Inspector.

Then, tap on the text field and check the name and value properties of the associated UIAElement (and also the equivalent NSObject accessibilityLabel and accessibilityValue properties). This Inspector will help you debug and write your scripts.

Simulate user interactions

Let’s go further and simulate user interaction. To tap a button, you simply call tap() on this element:

var tabBar = UIATarget.localTarget().frontMostApp().tabBar();
var tabButton = tabBar.buttons()["First"];  

// Tap the tab bar !
tabButton.tap();

You can also call doubleTap() and twoFingerTap() on UIAButtons. If you don't want to target an element, but only interact with the screen at specified coordinates, you can use:

  • Taps:

    UIATarget.localTarget().tap({x:100, y:200});
    UIATarget.localTarget().doubleTap({x:100, y:200});
    UIATarget.localTarget().twoFingerTap({x:100, y:200});
    
  • Pinches:

    UIATarget.localTarget().pinchOpenFromToForDuration({x:20, y:200},{x:300, y:200},2);
    UIATarget.localTarget().pinchCloseFromToForDuration({x:20, y:200}, {x:300, y:200},2);   
    
  • Drag and Flick:

    UIATarget.localTarget().dragFromToForDuration({x:160, y:200},{x:160,y:400},1);
    UIATarget.localTarget().flickFromTo({x:160, y:200},{x:160, y:400});
    

When you specify a duration, only a certain range is accepted, e.g. a drag duration must be greater than or equal to 0.5 s and less than 60 s.
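If a script computes durations dynamically, it can be worth checking them before handing them to UIAutomation. A minimal sketch in plain JavaScript (the helper names are my own; the bounds are the range stated above):

```javascript
// Sketch: validate a drag duration before passing it to dragFromToForDuration.
// The bounds (0.5 s inclusive, 60 s exclusive) match the range stated above.
function checkDragDuration(seconds) {
  if (typeof seconds !== "number" || seconds < 0.5 || seconds >= 60) {
    throw new Error("drag duration must be in [0.5, 60) seconds, got " + seconds);
  }
  return seconds;
}

// Alternative: clamp into the accepted range instead of throwing.
function clampDragDuration(seconds) {
  return Math.min(Math.max(seconds, 0.5), 59.9);
}
```

Failing early with a clear message beats the cryptic error Instruments reports for out-of-range values.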

Now, let’s put this in practice:

  1. Stop Instruments (⌘R)
  2. In the Scripts window, remove the current script
  3. Click on 'Add > Import' and select TestAutomation/TestUI/Test-1.js
  4. Click on Record (⌘R) and watch what happens…

The script is:

var testName = "Test 1";
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var window = app.mainWindow();

UIALogger.logStart( testName );
app.logElementTree();

//-- select the elements
UIALogger.logMessage( "Select the first tab" );
var tabBar = app.tabBar();
var selectedTabName = tabBar.selectedButton().name();
if (selectedTabName != "First") {
    tabBar.buttons()["First"].tap();
}

//-- tap on the text field
UIALogger.logMessage( "Tap on the text field now" );
var recipeName = "Unusually Long Name for a Recipe";
window.textFields()[0].setValue(recipeName);

target.delay( 2 );

//-- dismiss the keyboard
UIALogger.logMessage( "Dismiss the keyboard" );
app.logElementTree();
app.keyboard().buttons()["return"].tap();

var textValue = window.staticTexts()["RecipeName"].value();
if (textValue === recipeName){
    UIALogger.logPass( testName ); 
}
else{
    UIALogger.logFail( testName ); 
}

This script launches the app, selects the first tab if it is not already selected, sets the value of the text field to 'Unusually Long Name for a Recipe' and dismisses the keyboard. Some new functions to notice: delay(Number timeInterval) on UIATarget allows you to introduce a delay between interactions; logMessage(String message) on UIALogger can be used to log a message to the test output; and logPass(String message) on UIALogger indicates that your script has completed successfully.
You can also see how to access the different buttons on the keyboard and tap on them: app.keyboard().buttons()["return"].tap();


3. Tips to simplify your life

Introducing Tune-up

Now you have a basic idea of how to write tests. You will soon notice that there is a lot of redundancy and glue code in your tests, and you'll often rewrite code like this:

var target = UIATarget.localTarget();
var app = target.frontMostApp();
var window = app.mainWindow();

That's why we're going to use a small JavaScript library that eases writing UIAutomation tests. Go to https://github.com/alexvollmer/tuneup_js, get the library and copy the tuneup folder next to your tests folder. Now, we can rewrite Test1.js using Tune-Up:

#import "tuneup/tuneup.js"

test("Test 1", function(target, app) {
    var window = app.mainWindow();
    app.logElementTree();

    //-- select the elements
    UIALogger.logMessage( "Select the first tab" );
    var tabBar = app.tabBar();
    var selectedTabName = tabBar.selectedButton().name();
    if (selectedTabName != "First") {
        tabBar.buttons()["First"].tap();
    }

    //-- tap on the text field
    UIALogger.logMessage( "Tap on the text field now" );
    var recipeName = "Unusually Long Name for a Recipe";
    window.textFields()[0].setValue(recipeName);

    target.delay( 2 );

    //-- dismiss the keyboard
    UIALogger.logMessage( "Dismiss the keyboard" );
    app.logElementTree();
    app.keyboard().buttons()["return"].tap();

    var textValue = window.staticTexts()["RecipeName"].value();

    assertEquals(recipeName, textValue);
});

Tune-Up saves you from writing the same boilerplate code, and gives you some extras, like various assertions: assertTrue(expression, message), assertMatch(regExp, expression, message), assertEquals(expected, received, message), assertFalse(expression, message), assertNull(thingie, message), assertNotNull(thingie, message)… You can extend the library very easily: for instance, you can add a logDevice method to the UIATarget object by adding this function in uiautomation-ext.js:

extend(UIATarget.prototype, {
    logDevice: function(){
        UIALogger.logMessage("Dump Device:");
        UIALogger.logMessage("  model: " + UIATarget.localTarget().model());
        UIALogger.logMessage("  rect: " + JSON.stringify(UIATarget.localTarget().rect()));
        UIALogger.logMessage("  name: " + UIATarget.localTarget().name());
        UIALogger.logMessage("  systemName: " + UIATarget.localTarget().systemName());
        UIALogger.logMessage("  systemVersion: " + UIATarget.localTarget().systemVersion());
    }
});

Then, calling target.logDevice(), you should see:

Dump Device:
  model: iPhone Simulator
  rect: {"origin":{"x":0,"y":0},"size":{"width":320,"height":480}}
  name: iPhone Simulator
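Under the hood, Tune-Up's assertions are ordinary JavaScript functions that throw on failure, which is what makes a test() block report a failed run. A minimal sketch of two of them in plain JavaScript (the real tuneup_js implementations differ in details such as message formatting):

```javascript
// Sketch of Tune-Up-style assertions: each simply throws on failure.
function assertTrue(expression, message) {
  if (!expression) {
    throw new Error(message || "Assertion failed: expected a true expression");
  }
}

function assertEquals(expected, received, message) {
  if (expected !== received) {
    throw new Error((message || "Assertion failed") +
      ": expected <" + expected + "> but received <" + received + ">");
  }
}
```

In the rewritten Test1.js above, `assertEquals(recipeName, textValue)` follows exactly this pattern: if the static text doesn't match, the thrown error fails the test.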

Import external scripts

You can also reference one script from another with the #import directive. So you can create multiple tests and chain them by importing them into one single file:

#import "Test1.js"
#import "Test2.js"
#import "Test3.js"
#import "Test4.js"
#import "Test5.js"

By the power of the command line

If you want to automate your scripts, you can launch them from the command line. In fact, I recommend using this option instead of the Instruments graphical user interface. Instruments' UI is slow, and tests keep running even after they have reached their end. Launching UIAutomation tests from the command line is fast, and your scripts will stop at the end of the test.

To launch a script, you will need your device UDID; type in a terminal:

instruments -w your_ios_udid -t /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate name_of_your_app -e UIASCRIPT absolute_path_to_the_test_file 

For instance, in my case, the line looks like:

instruments -w a2de620d4fc33e91f1f2f8a8cb0841d2xxxxxxxx -t /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate TestAutomation -e UIASCRIPT /Users/jc/Documents/Dev/TestAutomation/TestAutomation/TestUI/Test-2.js 

If you are using a version of Xcode older than 4.3, you will need to type:

instruments -w your_ios_device_udid -t /Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate TestAutomation -e UIASCRIPT /Users/jc/Documents/Dev/TestAutomation/TestAutomation/TestUI/Test-2.js 

A small catch: don't forget to disable the passcode on your device, otherwise you will see this trace: remote exception encountered : 'device locked : Failed to launch process with bundle identifier 'com.manbolo.testautomation''. Yes, UIAutomation doesn't know your passcode yet!

The command line also works with the Simulator. You will need to know the absolute path of your app in the simulator file system. The simulator 'simulates' the device file system in the folder ~/Library/Application Support/iPhone Simulator/5.1/. Under this directory, you will find the Applications directory, which contains a sandbox for each of the apps installed in the simulator. Just identify the directory of the TestAutomation app and type:

instruments -t /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate "/Users/jc/Library/Application Support/iPhone Simulator/5.1/Applications/C28DDC1B-810E-43BD-A0E7-C16A680D8E15/TestAutomation.app" -e UIASCRIPT /Users/jc/Documents/Dev/TestAutomation/TestAutomation/TestUI/Test-2.js

A final word on the command line: if you don't specify an output path, the log results will be put in the folder in which you typed the command. You can use -e UIARESULTSPATH results_path to redirect the output of the scripts.
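Putting the pieces together, a nightly run can be wrapped in a small shell script. This is only a sketch: the UDID, app name, script path and results directory are placeholders for your own setup, and it prints the command (dry run) so you can check it before executing it for real:

```shell
#!/bin/sh
# Sketch of a nightly wrapper: build the instruments invocation with an
# explicit results directory. UDID, APP_NAME, SCRIPT and RESULTS are
# placeholders for your own setup.
TEMPLATE="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate"
UDID="your_ios_udid"
APP_NAME="TestAutomation"
SCRIPT="/tmp/TestUI/Test-2.js"
RESULTS="/tmp/uiauto-results"

# Make sure the results directory exists before redirecting output to it.
mkdir -p "$RESULTS"

CMD="instruments -w $UDID -t $TEMPLATE $APP_NAME -e UIASCRIPT $SCRIPT -e UIARESULTSPATH $RESULTS"
# Dry run: print the command instead of executing it.
echo "$CMD"
```

Drop the `echo` (and fill in the placeholders) to actually launch the test; cron can then run the wrapper every night.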

I haven't succeeded in launching multiple scripts in parallel from the command line. Use whole nights to chain and launch your scripts, so you will really test your app “while you sleep”.

Interactively record interaction

Instead of typing your scripts, you can record interactions directly on the device or in the simulator, and replay them later. To do this:

  1. Launch Instruments (⌘I)
  2. Create a new script
  3. Select the script editor
  4. At the bottom of the script editor, see that red button? Press it!
  5. Now, you can play with your app; you will see the captured interactions appear in the script window (even rotation events). Press the square button to stop recording.

“When things don’t work, add UIATarget.delay(1);”

While writing your scripts, you will have to play with timing, animations and so on. UIAutomation has various functions to get elements and wait for them even if they're not displayed yet, but the best advice comes from this extra presentation:

When things don’t work, add UIATarget.delay(1);!
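A slightly more robust pattern than a blind delay is to poll for a condition with a bounded number of retries. A generic sketch in plain JavaScript (the helper name is my own; in a real script the `wait` callback would be something like `function() { target.delay(1); }` and the predicate would query a UIAElement):

```javascript
// Sketch: retry a predicate up to maxTries times, calling wait() between
// attempts. Returns true as soon as the predicate holds, false otherwise.
function pollUntil(predicate, maxTries, wait) {
  for (var i = 0; i < maxTries; i++) {
    if (predicate()) {
      return true;
    }
    wait();
  }
  return false;
}
```

For example, instead of a fixed `target.delay(5)` before tapping a button that appears after an animation, you would poll for the button and fail the test explicitly if it never shows up.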


4. Advanced interactions

Handling unexpected and expected alerts

Handling alerts in automated tests has always been difficult: you've carefully written your scripts, launched your test suite just before going to bed, and, in the morning, you discover that all your tests have been ruined because your iPhone received an unexpected text message that blocked the tests. Well, UIAutomation helps you deal with that.

By adding this code in your script,

UIATarget.onAlert = function onAlert(alert){
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered!");
    return false; // use default handler
}

and returning false, you ask UIAutomation to automatically dismiss any UIAlertView, so alerts won't interfere with your tests. Your scripts will run as if there had never been any alert. But alerts can be part of your app and of the tested workflow, so in some cases you don't want to dismiss them automatically. To do so, you can test against the title of the alert, tap some buttons and return true. By returning true, you indicate to UIAutomation that this alert must be considered part of your test and treated accordingly.

For instance, if you want to test the 'Add Something' alert view shown by tapping an 'Add' button, you could write:

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered!");
    if (title == "Add Something") {
        alert.buttons()["Add"].tap();
        return true; // bypass default handler
    }
    return false; // use default handler
}

Easy Baby!

Multitasking

Testing multitasking in your app is also very simple: let's say you want to test that crazy background process you launch each time the app resumes from the background and enters the - (void)applicationWillEnterForeground:(UIApplication *)application selector. You can send the app to the background, wait for 10 seconds, and resume it by calling:

UIATarget.localTarget().deactivateAppForDuration(10);

deactivateAppForDuration(duration) will pause the script, simulate the user tapping the home button (sending the app to the background), wait, resume the app and resume the test script for you, all in one line of code!

Orientation

Finally, you can simulate the rotation of your iPhone. Again, pretty straightforward and easy:

var target = UIATarget.localTarget();
var app = target.frontMostApp();

// set landscape left
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_LANDSCAPELEFT);
UIALogger.logMessage("Current orientation is " + app.interfaceOrientation());

// portrait
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_PORTRAIT);
UIALogger.logMessage("Current orientation is " + app.interfaceOrientation()); 

5. The end

Useful links

This was a pretty long post, but I hope you can see the power of UIAutomation and the potential boost in quality that your app can gain. There is not a lot of documentation on UIAutomation, but I've listed a bunch of links that may help you.

And, of course

You'll need a free developer account to access these resources.

A video

To conclude this trip into UIAutomation, I can't resist showing you how we use UIAutomation with Meon in a little video. We use various tests, and in this video, we test that the player can play from level 0 to level 120. Heeeelp me, my iPhone is alive!
