PART I
This is the first in a series of posts on this topic.
The one question I get more than any other is "How does Google test?"
It's been explained in bits and pieces on this blog but the explanation
is due an update. The Google testing strategy has never changed, but the
tactical ways we execute it have evolved as the company has evolved.
We're now a search, apps, ads, mobile, and operating systems company,
among other things. Each of these Focus Areas (as we call them) has to do things
that make sense for their problem domain. As we add new FAs and grow the
existing ones, our testing has to expand and improve. What I am
documenting in this series of posts is a combination of what we are
doing today and the direction we are trending toward in the foreseeable
future.
Let's begin with organizational structure, and it's one that might
surprise you. There isn't an actual testing organization at Google. Test
exists within a Focus Area called Engineering Productivity. Eng Prod
owns any number of horizontal and vertical engineering disciplines; Test
is the biggest. In a nutshell, Eng Prod is made up of:
1. A product team
that produces internal and open source
productivity tools that are consumed by all walks of engineers across
the company. We build and maintain code analyzers, IDEs, test case
management systems, automated testing tools, build systems, source
control systems, code review schedulers, bug databases... The idea is to
make the tools that make engineers more productive. Tools are a very
large part of the strategic goal of prevention over detection.
2. A services team
that provides expertise to Google product
teams on a wide array of topics including tools, documentation, testing,
release management, training and so forth. Our expertise covers
reliability, security, internationalization, etc., as well as
product-specific functional issues that Google product teams might face.
Every other FA has access to Eng Prod expertise.
3. Embedded engineers
that are effectively loaned out to Google
product teams on an as-needed basis. Some of these engineers might sit
with the same product teams for years, while others cycle through teams
wherever they are needed most. Google encourages all its engineers to
change product teams often to stay fresh, engaged and objective. Testers
are no different, but the cadence of changing teams is left to the
individual. I have testers on Chrome who have been there for several
years and others who join for 18 months and cycle off. Keeping a healthy
balance between product knowledge and fresh eyes is something a test
manager has to pay close attention to.
So this means that testers report to Eng Prod managers but identify
themselves with a product team, like Search, Gmail or Chrome.
Organizationally they are part of both teams. They sit with the product
teams, participate in their planning, go to lunch with them, share in
ship bonuses and get treated like full members of the team. The benefit
of the separate reporting structure is that it provides a forum for
testers to share information. Good testing ideas migrate easily within
Eng Prod, giving all testers, no matter their product ties, access to the
best technology within the company.
This separation of project and reporting structures has its challenges.
By far the biggest is that testers are an external resource. Product
teams can't place too big a bet on them and must keep their quality
house in order. Yes, that's right: at Google it's the product teams that
own quality, not testers. Every developer is expected to do their own
testing. The job of the tester is to make sure they have the automation
infrastructure and enabling processes that support this self-reliance.
Testers enable developers to test.
What I like about this strategy is that it puts developers and testers
on equal footing. It makes us true partners in quality and puts the
biggest quality burden where it belongs: on the developers who are
responsible for getting the product right. Another side effect is that
it allows us a many-to-one dev-to-test ratio. Developers outnumber
testers. The better they are at testing, the more they outnumber us.
Product teams should be proud of a high ratio!
Ok, now we're all friends here, right? You see the hole in this strategy, I
am sure. It's big enough to drive a bug through. Developers can't test!
Well, who am I to deny that? No amount of corporate kool-aid could get
me to deny it, especially coming off my GTAC talk last year, where I
pretty much made a game of developer vs. tester (spoiler alert: the
tester wins).
Google's answer is to split the role. We solve this problem by having
two types of testing roles at Google to solve two very different testing
problems. In my next post, I'll talk about these roles and how we split
the testing problem into two parts.
PART II
By James Whittaker
In order for the “you build it, you break it” motto to be real, there
are roles beyond the traditional developer that are necessary.
Specifically, engineering roles that enable developers to do testing
efficiently and effectively have to exist. At Google we have created
roles in which some engineers are responsible for making others more
productive. These engineers often identify themselves as testers but
their actual mission is one of productivity. They exist to make
developers more productive and quality is a large part of that
productivity. Here's a summary of those roles:
The SWE, or Software Engineer, is the traditional developer
role. SWEs write functional code that ships to users. They create design
documentation, design data structures and overall architecture and
spend the vast majority of their time writing and reviewing code. SWEs
write a lot of test code, including test-driven design and unit tests,
and, as we explain in future posts, they participate in the construction
of small, medium and large tests. SWEs own quality for everything they
touch, whether they wrote it, fixed it or modified it.
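To make that concrete, here is a minimal sketch of the kind of unit test a SWE owns alongside a feature. It's illustrative only: the language (Python), the function and every name in it are mine, not Google code.

    import unittest

    def normalize_query(query: str) -> str:
        """Hypothetical feature code: canonicalize a search query."""
        return " ".join(query.lower().split())

    class NormalizeQueryTest(unittest.TestCase):
        """The SWE who wrote normalize_query owns these tests too."""

        def test_collapses_whitespace(self):
            self.assertEqual(normalize_query("  foo   bar "), "foo bar")

        def test_lowercases_input(self):
            self.assertEqual(normalize_query("FooBar"), "foobar")

    if __name__ == "__main__":
        unittest.main()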
The SET, or Software Engineer in Test, is also a developer
role, except their focus is on testability. They review designs and look
closely at code quality and risk. They refactor code to make it more
testable. SETs write unit testing frameworks and automation. They are a
partner in the SWE code base but are more concerned with increasing
quality and test coverage than adding new features or increasing
performance.
The TE, or Test Engineer, is the exact reverse of the SET.
It is a role that puts testing first and development second. Many
Google TEs spend a good deal of their time writing code in the form of
automation scripts and code that drives usage scenarios and even mimics a
user. They also organize the testing work of SWEs and SETs, interpret
test results and drive test execution, particularly in the late stages of a
project as the push toward release intensifies. TEs are product
experts, quality advisers and analyzers of risk.
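To illustrate the flavor of scenario-driving code a TE might write, here is a sketch. Everything in it is hypothetical: a real script would drive the actual product through its UI or API rather than the stand-in client used here.

    class FakeMailClient:
        """Stand-in for a real product client (UI driver or API wrapper)."""
        def __init__(self):
            self.inbox = []

        def send(self, to, subject):
            self.inbox.append((to, subject))

        def search(self, term):
            return [msg for msg in self.inbox if term in msg[1]]

    def run_send_and_find_scenario(client):
        """Mimic a user: send a message, then find it again by searching."""
        client.send(to="friend@example.com", subject="lunch plans")
        results = client.search("lunch")
        assert results, "user could not find the mail they just sent"
        print("scenario passed with %d result(s)" % len(results))

    if __name__ == "__main__":
        run_send_and_find_scenario(FakeMailClient())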
From a quality standpoint, SWEs own features and the quality of those
features in isolation. They are responsible for fault-tolerant designs,
failure recovery, TDD, unit tests, and for working with the SET to write
tests that exercise the code for their feature.
SETs are developers who provide testing features: frameworks that can
isolate newly developed code by simulating its dependencies with stubs,
mocks and fakes, and submit queues for managing code check-ins. In other
words, SETs write code that allows SWEs to test their features. Much of
the actual testing is performed by the SWEs; SETs are there to ensure
that features are testable and that the SWEs are actively involved in
writing test cases.
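Here's a minimal sketch of that isolation idea: the code under test takes its dependency as a parameter, so a test can substitute a mock for the real service. The names are hypothetical, and this uses stock Python mocks, not any Google framework.

    import unittest
    from unittest import mock

    def fetch_display_name(user_id, directory):
        """Newly developed code under test. The 'directory' dependency is
        injected so a test can swap in a stub for the real service."""
        record = directory.lookup(user_id)
        return record["name"].title()

    class FetchDisplayNameTest(unittest.TestCase):
        def test_formats_name_from_directory(self):
            # A mock stands in for the real directory service.
            directory = mock.Mock()
            directory.lookup.return_value = {"name": "ada lovelace"}
            self.assertEqual(fetch_display_name(42, directory), "Ada Lovelace")
            directory.lookup.assert_called_once_with(42)

    if __name__ == "__main__":
        unittest.main()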
Clearly the SET's primary focus is on the developer. Individual feature
quality is the target, and enabling developers to easily test the code
they write is the SET's main concern. This development focus
leaves one large hole which I am sure is already evident to the reader:
what about the user?
User-focused testing is the job of the Google TE. Assuming that the SWEs
and SETs performed module and feature level testing adequately, the
next task is to understand how well this collection of executable code
and data works together to satisfy the needs of the user. TEs act as a
double-check on the diligence of the developers. Any obvious bugs are an
indication that early cycle developer testing was inadequate or sloppy.
When such bugs are rare, TEs can turn to their primary task of ensuring
that the software runs common user scenarios, is performant and secure,
is internationalized and so forth. TEs perform a lot of testing and
coordinate testing among other TEs, contract testers, crowd-sourced
testers, dogfooders, beta users and early adopters. They communicate to
all parties the risks inherent in the basic design, feature complexity
and failure-avoidance methods. Once TEs get engaged, there is no end to
their mission.
Ok, now that the roles are better understood, I'll dig into more details
on how we choreograph the work items among them. Until next
time...thanks for your interest.
PART III
By James Whittaker
Lots of questions in the comments to the last two posts. I am not
ignoring them. Hopefully many of them will be answered here and in
following posts. I am just getting started on this topic.
At Google, quality is not equal to test. Yes, I am sure that is true
elsewhere too. “Quality cannot be tested in” is so cliché it has to be
true. From automobiles to software, if it isn’t built right in the first
place then it is never going to be right. Ask any car company that has
ever had to do a mass recall how expensive it is to bolt on quality
after-the-fact.
However, this is neither as simple nor as accurate as it sounds. While
it is true that quality cannot be tested in, it is equally evident that
without testing it is impossible to develop anything of quality. How
do you decide whether what you built is high quality without testing it?
The simple solution to this conundrum is to stop treating development
and test as separate disciplines. Testing and development go hand in
hand. Code a little and test what you built. Then code some more and
test some more. Better yet, plan the tests while you code or even
before. Test isn’t a separate practice, it’s part and parcel of the
development process itself. Quality is not equal to test; it is achieved
by putting development and testing into a blender and mixing them until
one is indistinguishable from the other.
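As a tiny, hypothetical illustration of that loop in test-first form: write the tests, watch them fail, then code just enough to make them pass, and repeat. The example and all its names are mine, not Google's.

    import unittest

    # Step 1: write the tests first; they fail until word_count exists.
    class WordCountTest(unittest.TestCase):
        def test_counts_words(self):
            self.assertEqual(word_count("build a little test a little"), 6)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    # Step 2: write just enough code to make the tests pass, then repeat.
    def word_count(text):
        return len(text.split())

    if __name__ == "__main__":
        unittest.main()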
At Google this is exactly our goal: to merge development and testing so
that you cannot do one without the other. Build a little and then test
it. Build some more and test some more. The key here is who is doing the
testing. Since the number of actual dedicated testers at Google is so
disproportionately low, the only possible answer has to be the
developer. Who better to do all that testing than the people doing the
actual coding? Who better to find the bug than the person who wrote it?
Who is more incentivized to avoid writing the bug in the first place?
The reason Google can get by with so few dedicated testers is that
developers own quality. In fact, teams that insist on having a large
testing presence are generally assumed to be doing something wrong.
Having too large a test team is a very strong sign that the code/test
mix is out of balance. Adding more testers is not going to solve
anything.
This means that quality is more an act of prevention than it is
detection. Quality is a development issue, not a testing issue. To the
extent that we are able to embed testing practice inside development, we
have created a process that is hyper-incremental, where mistakes can be
rolled back if any one increment turns out to be too buggy. We’ve not
only prevented a lot of customer issues, we have greatly reduced the
number of testers necessary to ensure the absence of recall-class bugs.
At Google, testing is aimed at determining how well this prevention
method is working. TEs are constantly on the lookout for evidence that
the SWE-SET combination of bug writers/preventers is skewed toward the
latter, and TEs raise alarms when that process seems out of whack.
Manifestations of this blending of development and testing are all over
the place, from code review notes asking ‘where are your tests?’ to
posters in the bathrooms reminding developers about best testing
practices, our infamous Testing On The Toilet guides. Testing must be an
unavoidable aspect of development and the marriage of development and
testing is where quality is achieved. SWEs are testers, SETs are testers
and TEs are testers.
If your organization is also doing this blending, please share your
successes and challenges with the rest of us. If not, then here is a
change you can help your organization make: get developers fully vested
in the quality equation. You know the old saying that chickens are happy
to contribute to a bacon and egg breakfast but the pig is fully
committed? Well, it's true...go oink at one of your developers and see if
they oink back. If they start clucking, you have a problem.