Introduction
After reading an article called "How Google Earth Works" on the great site HowStuffWorks.com, it became apparent that the article was more of a "how cool it is" and "here’s how to use it" than a "how Google Earth [really] works."
So I thought there might be some interest, and despite some valid intellectual property concerns, here we are, explaining how at least part of Google Earth works.
Keep in mind, those IP issues are real. Keyhole (now known as Google Earth) was attacked once already with claims that they copied someone else’s inferior (IMO) technology. The suit was completely dismissed by a judge, but only after many years of pain. Still, it highlights one problem of even talking about this stuff. Anything one says could be fodder for some troll to claim he invented what you did because it "sounds similar." The judge in the Skyline v. Google case understood that "sounding similar" is not enough to prove infringement. Not all judges do.
Anyway, the solution to discussing "How Google Earth [Really] Works" is to stick to information that has already been disclosed in various forms, especially in Google’s own patents, of which there are relatively few. Fewer software patents is better for the world. But in this case, more patents would mean we could talk more openly about the technology, which, btw, was one of the original goals of patents — a trade of limited monopoly rights in exchange for a real public benefit: disclosure. But I digress…
For the more technically inclined, you may want to read these patents directly. Be warned: lawyers and technologists sometimes emulsify to form a sort of linguistic mayonnaise, a soul-deadening substance known as Patent English, or Painglish for short. If you’re brave, or masochistic, here you go:
1. Asynchronous Multilevel Texture Pipeline
2. Server for geospatially organized flat file data
There are also a few more loosely related Google patents. I don’t know why these are shouting, but perhaps because they’re very important to the field. I’ll hopefully get to these in more detail in future articles:
3. DIGITAL MAPPING SYSTEM
4. GENERATING AND SERVING TILES IN A DIGITAL MAPPING SYSTEM
5. DETERMINING ADVERTISEMENTS USING USER INTEREST INFORMATION AND MAP-BASED LOCATION INFORMATION
6. ENTITY DISPLAY PRIORITY IN A DISTRIBUTED GEOGRAPHIC INFORMATION SYSTEM (this one will be huge)
And there is this more informative technical paper from SGI (PDF) on hardware "clipmapping," which we’ll refer to later on. Michael Jones, btw, is one of the driving forces behind Google Earth, and as CTO, is still advancing the technology.
I’m going to stick closely to what’s been disclosed or is otherwise common technical knowledge. But I will hopefully explain it in a way that most humans can understand and maybe even appreciate. At least that’s my goal. You can let me know.
Big Caveat: the Google Earth code base has probably been rewritten several times since I was involved with Keyhole and perhaps even after these patents were submitted. Suffice it to say, the latest implementations may have changed significantly. And even my explanations are going to be so broad (and potentially out-dated) that no one should use this article as the basis for anything except intellectual curiosity and understanding.
Also note: we’re going to proceed in reverse, strange as it may seem, from the instant the 3D Earth is drawn on your screen, and later trace back to the time the data is served. I believe this will help explain why things are done as they are and why some other approaches don’t work nearly as well.
Part 1, The Result: Drawing a 3D Virtual Globe
There are two principal differences between Google Maps and Earth that inform how things should ideally work under the hood. The first is the difference between fixed-view (often top-down) 2D & free-perspective 3D rendering. The second is between real-time and pre-rendered graphics. These two distinctions are fading away as the products improve and converge. But they highlight important differences, even today.
What both have in common is that they begin with traditional digital photography — lots of it — basically one giant high-resolution (or multi-resolution) picture of the Earth. How they differ is largely in how they render that data.
Consider: The Earth is approximately 40,000 km around the waist. Whoever says it’s a small world is being cute. If you stored only one pixel of color data for every square kilometer of surface, a whole-earth image (flattened out in, say, a mercator projection) would be about 40,000 pixels wide and roughly half as tall. That’s far more than most 3D graphics hardware can handle today. We’re talking about an image of 800 megapixels and 2.4 gigabytes at least. Many PCs today don’t even have 2GB of main memory. And in terms of video RAM, needed to render, a typical PC has maybe 128MB, with a high-end gaming rig having upwards of 512MB.
And remember, this is just your basic run-of-the-mill one-kilometer-per-pixel whole-earth image. The smallest feature you could resolve with such an image is about 2 kilometers wide (thank you, Mr. Nyquist) — no buildings, rivers, roads, or people would be apparent. But for most major US cities, Google Earth deals in resolutions that can resolve objects as small as half a meter or less: at least four thousand times denser along each axis, or sixteen million times more storage than the above example.
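If you want to check my math, here's a quick back-of-the-envelope calculation in Python, using the same rough assumptions as above: a 40,000 km circumference, a flattened image half as tall as it is wide, and 3 bytes per pixel.

```python
# A quick sanity check of those figures (rough numbers only; real-world
# coverage is partial and the imagery is compressed, so actual storage
# is far smaller than the uncompressed worst case).

EARTH_CIRCUMFERENCE_M = 40_000_000  # roughly 40,000 km around the equator

def whole_earth_image(meters_per_pixel, bytes_per_pixel=3):
    width = EARTH_CIRCUMFERENCE_M / meters_per_pixel  # flattened-out width
    height = width / 2                                # roughly half as tall
    pixels = width * height
    return width, pixels, pixels * bytes_per_pixel

w, px, b = whole_earth_image(1000)                    # 1 km per pixel
print(f"{w:,.0f} px wide, {px/1e6:.0f} megapixels, {b/1e9:.1f} GB uncompressed")

# Resolving half-meter features needs roughly quarter-meter pixels (Nyquist):
linear_factor = 1000 / 0.25
print(f"{linear_factor:,.0f}x denser per axis, {linear_factor**2:,.0f}x more storage")
```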
We’re talking about images that would (and do) literally take many terabytes to store. There is no way that such a thing could ever be drawn on today’s PCs, especially not in real-time.
And yet it happens every time you run Google Earth.
Consider: In a true 3D virtual globe, you can arbitrarily tilt and rotate your view to pretty much look anywhere (except perhaps underground — and even that’s possible if we had the data). In all 3D globes, there exists some source data, typically, a really high-resolution image of the whole earth’s surface, or at least the parts for which the company bought data. That source data needs to be delivered to your monitor, mapped onto some virtual sphere or ideally onto small 3D surfaces (triangles, etc..) that mimic the real terrain, mountains, rivers and so on.
If you, as a software designer, decide not to allow your view of the Earth to ever tilt or rotate, then congrats, you’ve simplified the engineering problem and can take some time off. But then you don’t have Google Earth.
Now, various schemes exist to allow one to "roam" part of this ridiculously large texture. Other mapping applications solve this in their own way, and often with significant limitations or visual artifacts. Most of them simply cut their huge Earth up into small regular tiles, perhaps arranged in a quadtree, and draw a certain number of those tiles on your screen at any given time, either in 2D (like Google Maps) or in 3D, like Microsoft’s Virtual Earth apparently does.
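To make that tiling idea concrete, here's a minimal sketch of the common Web-Mercator quadtree addressing used by 2D "slippy" maps. This is only an illustration of the general cut-into-tiles approach, not a claim about how Google Earth (or Virtual Earth) actually indexes its data:

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Map a longitude/latitude to (x, y) tile indices at a given quadtree depth.

    Each zoom level cuts the world into a 2^zoom by 2^zoom grid of square tiles,
    so every tile at one level has exactly four children at the next.
    """
    n = 2 ** zoom                                       # n x n tiles at this level
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(-122.4194, 37.7749, 12))           # a tile over San Francisco
```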
But the way Google Earth solved the problem was truly novel, and worthy of a software patent (and I am generally opposed to software patents). To explain it, we’ll have to build up a few core concepts. A background in digital signal theory and computer graphics never hurts, but I hope this will be easy enough that that won’t be necessary.
I’m not going to explain how 3D rendering works — that’s covered elsewhere. But I am going to focus on texture mapping and texture filtering in particular, because the details are vital to making this work. The progression from basic concepts to the more advanced texture filtering will also help you understand why things work this way, and just how amazing this technology really is. If you have the patience, here’s a very quick lesson in texture filtering.
The Basics
The problem of scaling, rotating and warping basic 2D images was solved a long time ago. The most common solution is called Bilinear Filtering. All that really means is that for each new (rotated, scaled, etc..) pixel you want to compute, you take the four "best" pixels from your source image and blend them together. It’s "bilinear" because it linearly blends two pixels at a time (along one axis), and then linearly blends those two results (along the other axis) for the final answer.
[A "linear blend," in case it's not clear, is butt simple: take 40% of color A, and 60% of color B and add them together. The 40/60 split is variable, depending on how "important" each contributor is, as long as the total adds up to 100%.]
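For the curious, here's roughly what that looks like in code: a tiny NumPy sketch of bilinear sampling. Your graphics hardware does the equivalent in silicon, billions of times per second.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample image `img` (H x W x channels, float) at fractional coords (x, y).

    Takes the four "best" source pixels and blends them: two linear blends
    along x, then one linear blend of those two results along y.
    """
    h, w = img.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                      # how far between neighboring pixels

    top    = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # blend along x, upper row
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # blend along x, lower row
    return (1 - fy) * top + fy * bottom                 # blend those results along y

# A tiny 2x2 single-channel "image": black on the left, white on the right.
img = np.array([[[0.0], [1.0]],
                [[0.0], [1.0]]])
print(bilinear_sample(img, 0.6, 0.5))            # -> [0.6], a 40/60 blend
```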
That functionality is built into your 3D graphics hardware such that your computer can nowadays do literally billions of these calculations per second. Don’t ask me why your favorite paint program is so slow.
The problem being addressed can be visualized pretty easily — that’s what I love about computer graphics. It turns out, whenever we map some source pixels onto different (rotated, scaled, tilted, etc…) output pixels, visual information is lost.
The problem is called "aliasing" and it occurs because we digitally sampled the original image one way, at some given frequency (aka resolution), and now we’re re-sampling that digital data in some other way that doesn’t quite match up.
Now, when we talk about output pixels and destinations, it doesn’t much matter if the destination is a bitmap in a paint program or the 3D application window that shows the Earth. Aliasing happens whenever the output pixels do not line up with the sampling interval (frequency, resolution) of the source image. And aliasing makes for poor visual results. Dealing with aliasing is about half of what texture mapping is all about. The rest is mostly memory management. And the constraints of both inform how Google Earth works.
The mission then is to minimize aliasing through cleverness and good design. The best way to do this is to get as close as possible to a 1:1 correspondence between input and output pixels, or at least to generate so many extra pixels that we can safely down-sample the output to minimize aliasing (also known as "anti-aliasing"). We often do both.
Consider: for resizing images, it only gets worse — each pixel in your destination image might correspond to hundreds of pixels of source imagery, or vice-versa. Bilinear interpolation, remember, will only pick the best four source pixels and ignore the rest. So it can therefore skip right over important pixels, like edges, shadows, or highlights. If some such pixel is picked for blending during one frame and skipped over subsequently, you’ll get an ugly "pixel-popping" or scintillation effect. I’m sure you’ve seen it in some video games. Now you know why.
Tilting images (or any 3D transformation) is even more problematic, because now we have elements of scaling and rotation, but also a great variation in pixel density across rendered surfaces. For example, in the "near" part of a scene, your nice high-res ground image might be scaled up such that the pixels look blurry. In the "far" part of the scene, your image might appear scintillated (as above) because simple 2×2 bilinear interpolation is necessarily skipping important visual details from time to time.
[Figure: screenshot copyright Microsoft Virtual Earth. Here’s an example of where a certain kind of texture filtering causes poor results. The text labels are hardly readable (why they’re painted into the terrain image at all is another issue).]
Better Filtering, Revealed
Most consumer 3D hardware already supports what’s called "tri-linear" filtering. With tri-linear and a closely coupled technique called mip-mapping, the hardware computes and stores a series of lower resolution versions of your source image or texture map. Each mip-map is automatically down-sampled by a factor of 2, repeatedly, until we reach a 1×1 pixel image whose color is the average of all source image pixels.
So, for example, if you provided the hardware with a nice 512×512 source image, it would compute and store 9 extra mip-levels for you (256, 128, 64, 32, 16, 8, 4, 2, and 1 pixel square). If you stacked those vertically, you might more easily visualize the "mip-stack" as an upside down pyramid, where each mip-level (each horizontal slice) is always 1/2 the width of the one above.
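Here's a small sketch of how such a mip chain might be built, using a simple 2×2 box filter. Real drivers and hardware may use fancier filters, but the idea is the same:

```python
import numpy as np

def build_mip_stack(texture):
    """Build a mip-map chain by repeated 2x2 box-filter downsampling.

    Each level is half the width and height of the one before, down to a
    1x1 image whose single pixel is the average of the entire source.
    """
    levels = [texture.astype(np.float64)]
    while levels[-1].shape[0] > 1:
        prev = levels[-1]
        h, w = prev.shape[:2]
        # Average each 2x2 block of pixels into one pixel of the next level.
        down = prev.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        levels.append(down)
    return levels

mips = build_mip_stack(np.random.rand(512, 512, 3))
print([lvl.shape[0] for lvl in mips])   # [512, 256, 128, ..., 2, 1]: 9 extra levels
```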
[Figure: drawing of a mip-map pyramid (not to scale). The X depicts trilinear filtering sampling two mip-levels for an in-between pixel to reduce aliasing.]
During 3D rendering, mip-mapping and tri-linear filtering take each destination pixel, pick the two most appropriate mip-levels, essentially do a bi-linear blend on both, and then blend those two results again (linearly) for the final tri-linear answer.
So for example, say the next pixel would have no aliasing if only the source image had a resolution of 47.5 pixels across. The system has stored power-of-two mip-maps (16, 32, 64…). So the hardware will cleverly use the 64×64 and 32×32 pixel versions closest to the desired sampling of 47.5, compute a bilinear (4-sample) result for each, and then take those two results and blend them a third time.
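In code, the level-of-detail math for that example might look something like this (a sketch only; the exact weighting conventions vary between hardware, and the bilinear taps within each level are left out):

```python
import math

def trilinear_mip_blend(desired_resolution, base_size=512):
    """Pick the two power-of-two mip levels bracketing a desired sampling rate,
    and return each level's width along with its blend weight."""
    lod = math.log2(base_size / desired_resolution)   # fractional mip level
    lower, upper = math.floor(lod), math.ceil(lod)    # finer and coarser levels
    frac = lod - lower                                # how far toward the coarser one
    return (base_size >> lower, 1 - frac), (base_size >> upper, frac)

# The example from the text: an ideal resolution of 47.5 pixels across.
(fine, w_fine), (coarse, w_coarse) = trilinear_mip_blend(47.5)
print(fine, round(w_fine, 2), coarse, round(w_coarse, 2))
# -> 64 0.57 32 0.43 : mostly the 64x64 level, blended with some of the 32x32
```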
That’s tri-linear filtering in a nutshell, and along with mip-mapping, it goes a long way toward minimizing aliasing for many common cases of 3D transformations.
Remember: so far, we’ve been talking about nice, small images, like 512×512 pixels. Our whole-earth image will need to be millions of pixels across. So one might consider making a giant mip-map of our whole-earth image, at say one meter resolution. No problem, right? But you’ll realize fairly soon that would require a mip-map pyramid 26 levels deep, where the highest resolution mip-level is some 67 million pixels across. That simply won’t fit on any 3D video card on the market, at least not in this decade.
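That depth figure is easy to check, using the same counting convention as the rest of this article (a pyramid n levels deep tops out at 2^n pixels across):

```python
import math

EARTH_CIRCUMFERENCE_M = 40_000_000

# How deep must a whole-earth mip pyramid be at 1 meter per pixel?
levels = math.ceil(math.log2(EARTH_CIRCUMFERENCE_M))  # smallest n with 2^n >= 40 million
print(levels, f"top level is {2 ** levels:,} pixels across")   # 26, ~67 million
```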
I’m guessing Microsoft’s Virtual Earth gets around this limit by cutting their giant earth texture into many smaller distinct tiles of, say, 256 pixels square, where each gets mip-mapped individually. That approach would work to an extent, but it would be relatively slow and produce some visual artifacts, like the blurring we see above, and a popping in and out of large square areas as you zoom in and out.
There’s one last concept about mip-maps to understand before we move on to the meat of the issue. Imagine for a moment that the pixels in the mip-map pyramid are actually color-coded as I’ve indicated above, with an entire layer colored red, another yellow, etc.. Drawing this on a tilted plane (like the Earth’s ground "plane") would then seem to "slice through" the pyramid at an interesting angle, using only those parts of the pyramid that are needed for this view.
It’s this characteristic of mip-mapping that allows Google Earth to exist, as we’ll see in a minute.
[Figure, left: a typical tilted Google Earth image (copyright & courtesy Google). Right: the same view, using color to show which mip levels inform which pixels.]
The example on the left shows a normal 3D scene from Google Earth, and the one on the right is a rough diagram showing where in the mip-stack a 3D hardware system might find the best source pixels, if they were so colorized.
The nearer area gets filled from the highest-resolution mip-level (red), dropping off to lower and lower resolutions as we get farther from the virtual point of view. This helps avoid the scintillation and other aliasing problems we talked about earlier, and looks quite nice. We get as close as possible to a 1:1 correspondence between source and destination, pixel for pixel, so aliasing is minimized.
Better still, 3D graphics hardware with tri-linear filtering has been further improved with something called anisotropic filtering (a simple preference option in Google Earth), which is essentially the same core idea as the previous examples, but using non-square filters beyond the basic 2×2. This is very important for visual quality, because even with fancy mip-mapping, if you tilt a textured polygon to a very oblique angle, the hardware must choose a low-resolution mip-level to avoid scintillation on the narrow axis. And that means the whole polygon is sampled at too low a resolution, when it’s only one direction that needed to dip down to the low-res stuff. Suffice it to say, if your hardware supports anisotropic filtering, turn it on for best results. It’s worth every penny.
Now, to the meat of the issue
We still have to solve the problem of how to mip-map a texture with millions of pixels in either dimension. Universal Texture (in the Google Earth patent) solves the problem while still providing high quality texture filtering. It creates one giant multi-terabyte whole-earth virtual-texture in an extremely clever way. I can say that since I didn’t actually invent it. Chris Tanner figured out a way to do on your PC what had only ever been done on expensive graphics supercomputers with custom circuitry, called Clip Mapping (see SGI’s PDF paper, also by Chris, Michael, et al., for a lot more depth on the original hardware implementation). That technology is essentially what made Google Earth possible. And my very first job on this project was making that work over an internet connection, way back when.
So how does it actually work?
Well, instead of loading and drawing that giant whole-earth texture all at once — which is impossible on most current hardware — and instead of chopping it up into millions of tiles and thereby losing the better filtering and efficiency we want, recall from just above that we typically only ever use a narrow slice or column of our full mip-map pyramid at any given time. The angle and height of this virtual column changes quite a bit depending on our current 3D perspective. And this usage pattern is fairly straightforward for a clever algorithm to compute or infer, knowing where you are and what the application is trying to draw.
A Universal Texture is a mip-map plus a software-emulated clip-stack, meaning it can mimic a mip-map of many more levels and greater ultimate resolution than can fit in any real hardware.
[Figure: diagram of a clip stack sitting atop a mip-map pyramid. Note: though this diagram doesn’t depict it as precisely as the paper, the clip stack’s "angle" shifts around to best keep the column centered.]
So this clever algorithm figures out which sections of the larger virtual texture it needs at any given time and pages only those from system memory to your graphics card’s dedicated texture memory, where it can be drawn very efficiently, even in real-time.
The main modification to basic mip-mapping, from a conceptual point of view, is that the upside down pyramid is no longer just a pyramid, but is now much, much taller, containing a clipped stack of textures, called, oddly enough, a "clip stack," perhaps 16 to 30+ levels high. Conceptually, it’s as if you had a giant mip-map pyramid that’s 16-30 levels deep and millions to billions of pixels wide, but you clipped off the sides — i.e., the parts you don’t need right now.
Imagine the Washington Monument, upside down, and you’ll get the idea. In fact, imagine that tower leaning this way or that, like the one in Pisa, and you’ll be even closer. The tower leans in such a way that the pixels inside the tower are what you need for rendering right now. The rest is ignored.
Each clip-level is still twice the resolution of the one "below" it, like all mip-maps, and nice quality filtering still works as before. But since the clip stack is limited to a fixed but roaming footprint, say 512×512 pixels wide (another preference in Google Earth), that means that each clip-level is both twice the effective resolution and half the coverage area of the previous. That’s exactly what we want. We get all the benefits of a giant mip-map, with only the parts relevant to any given view.
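To put some numbers on that "twice the resolution, half the coverage" property, here's a rough sketch of how much ground a fixed 512×512 clip level covers at each level. The level numbering and the 40,000 km circumference are my own simplifying assumptions, not the patent's:

```python
EARTH_CIRCUMFERENCE_M = 40_000_000
CLIP_SIZE = 512                      # fixed, roaming footprint of each clip level

def clip_level_coverage_m(level):
    """Ground distance covered by one 512x512 clip level, in meters on a side.

    Level 0 here is the 512-wide top of the base pyramid (the whole earth);
    each level above doubles the resolution but keeps the same 512-pixel
    footprint, so it covers half the ground distance of the level below it.
    """
    meters_per_pixel = EARTH_CIRCUMFERENCE_M / (CLIP_SIZE * 2 ** level)
    return CLIP_SIZE * meters_per_pixel

for level in (0, 5, 10, 15, 20):
    print(f"clip level {level:2d}: ~{clip_level_coverage_m(level) / 1000:,.2f} km on a side")
```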
Put another way, Google Earth cleverly and progressively loads high-res information for what’s at the focal "center" of your view (the red part above), and resolution drops off by powers of two from there. As you tilt and fly and watch the land run towards the horizon, Universal Texture is optimally sending only the best and most useful levels of detail to the hardware at any given time. What isn’t needed, isn’t even touched. That’s one thing that makes it ultra-efficient.
It’s also very memory-efficient. The total texture memory for an earth-sized texture is now (assuming this 512 wide base mip-map, and say 20 extra clip-levels of data) only about 17 megabytes, not the dozens to hundreds of terabytes we were threatened with before. It’s actually doable, and worked in 1999 on 3D hardware that had only 32 MB or less. Other techniques are only now becoming possible with bigger and bigger 3D cards.
In fact, with only 20 clip-levels (plus 9 mip levels for the base pyramid), we see that 2^29 yields a virtual texture capable of up to 536 million pixels in either dimension. Multiplying that by 1/2 vertically gives a virtual image well over a hundred thousand terapixels in area, or enough excess capacity to represent features as small as 0.15 meters (about 5 inches), wherever the data is available. And that’s not the actual limit. I simply picked 20 clip levels as a reasonable number. And you thought the race for more megapixels on digital cameras was challenging. Multiply that by a million and you’re in the planetary ballpark.
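Here's the memory arithmetic for anyone who wants to check it, assuming 3 bytes per pixel, a full base mip pyramid up to 512×512, and 20 fixed 512×512 clip levels; it lands right around the ~17 megabyte figure above:

```python
CLIP_SIZE = 512
BYTES_PER_PIXEL = 3
BASE_LEVELS = 9            # base pyramid tops out at 2^9 = 512 pixels across
CLIP_LEVELS = 20           # the "reasonable number" picked above

# The clipped stack: 20 levels, each a fixed 512x512 footprint in memory.
clip_bytes = CLIP_LEVELS * CLIP_SIZE * CLIP_SIZE * BYTES_PER_PIXEL

# The base pyramid: 1x1 up through 512x512 (a full mip chain is ~4/3 the base).
base_bytes = sum((2 ** i) ** 2 for i in range(BASE_LEVELS + 1)) * BYTES_PER_PIXEL

print(f"resident texture memory: ~{(clip_bytes + base_bytes) / 1e6:.1f} MB")
print(f"virtual texture width:   {2 ** (BASE_LEVELS + CLIP_LEVELS):,} pixels")
```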
Fortunately, for now, Google only really has to store a few dozen terapixels of imagery. The other beauty of the system is that the highest levels of resolution need not exist everywhere for this to work. Wherever the resolution is more limited, wherever there are gaps, missing data, etc., the system only draws what it has. If there is higher resolution data available, it is fetched and drawn too. If not, the system uses the next lower resolution version of that data (see mip-mapping above) rather than drawing a blank. That’s exactly why you can zoom into some areas and see only a big blur, where other areas are nice and crisp. It’s all about data availability, not any hard limit on the 3D rendering. If the data were available, you could see centimeter resolution in the middle of the ocean.
The key then to making this all work is that, as you roam around the 3D Earth, the system can efficiently page new texture data from your local disk cache and system memory into your graphics texture memory. (We’ll cover some of how stuff gets into your local cache next time). You’ve literally been watching that texture uploading happen without necessarily realizing it. Hopefully, now you will appreciate all the hard work that went into making this all work so smoothly — like feeding an entire planet piecewise through a straw.
Finally, there’s one other item of interest before we move on. The reason this patent emphasizes asynchronous behavior is that these texture bits take some small but cumulative time to upload to your 3D hardware continuously, and that’s time taken away from drawing 3D images in a smooth, jitter-free fashion or handling user input responsively — not to mention the hardware is typically busy with its own demanding schedule.
To achieve a steady 60 frames per second on most hardware, the texture uploading is divided into small, thin slices that very quickly update graphics video memory with the source data for whatever area you’re viewing, hopefully just before you need it, but at worst, just after. What’s really clever is that the system needs only upload the smallest parts of these textures that are needed and it does it without making anyone wait. That means rendering can be smooth and the user interface can be as fluid as possible. Without this asynchronicity, forget about those nice parabolic arcs from coast to coast.
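Here's a conceptual sketch of that kind of budgeted, per-frame upload loop. Everything in it is invented for illustration (the names, the 2 ms budget, the fake upload cost); the real engine talks to the graphics driver in C++, not Python, but the scheduling idea is the same:

```python
import collections
import time

UPLOAD_BUDGET_S = 0.002                  # ~2 ms of a ~16.6 ms frame at 60 fps
pending_uploads = collections.deque()    # texture sub-regions waiting for video RAM

def upload_subregion(region):
    """Stand-in for copying one thin texture slice into graphics memory."""
    time.sleep(0.0005)                   # pretend each slice costs ~0.5 ms

def per_frame_texture_update():
    """Upload queued slices until this frame's budget is spent, then stop.

    Whatever doesn't fit simply waits for the next frame, so rendering and
    user input never stall behind a big texture transfer.
    """
    start = time.perf_counter()
    while pending_uploads and time.perf_counter() - start < UPLOAD_BUDGET_S:
        upload_subregion(pending_uploads.popleft())

# One simulated frame: ten slices arrive, but only a few are uploaded right now.
pending_uploads.extend(f"slice-{i}" for i in range(10))
per_frame_texture_update()
print(len(pending_uploads), "slices still queued for later frames")
```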
Now, other virtual globes can also virtualize the whole-earth texture, perhaps by cutting it into tiles, and even use multiple power-of-two resolutions like GE does. But without the Universal Texturing component or something better, they’ll either be limited to 2D top-down rendering, or they’ll do 3D rendering with unsatisfying results: blurring, scintillation, and nowhere near the same performance when streaming the data from the cache into texture memory for rendering.
And that’s probably more than you ever wanted to know about how the whole Earth is drawn on your screen each frame.