Copyright © 2012 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: August 2012
Production Reference: 1170812
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-84968-716-4
www.packtpub.com
Cover Image by Sandeep Babu (<[email protected]>)
Author
Peter Ritchie
Reviewers
Ognjen Bajic
Carlos Hulot
Ahmed Ilyas
Ken Tucker
Acquisition Editor
Rashmi Phadnis
Lead Technical Editor
Dayan Hyames
Technical Editors
Manmeet Singh Vasir
Merin Jose
Manasi Poonthottam
Project Coordinator
Joel Goveya
Proofreader
Joel T. Johnson
Indexer
Rekha Nair
Graphics
Valentina D'silva
Manu Joseph
Production Coordinators
Aparna Bhagat
Nitesh Thakur
Cover Work
Aparna Bhagat
Nitesh Thakur
Peter Ritchie is a software development consultant. He is the president of Peter Ritchie Inc. Software Consulting Co., a software consulting company in Canada's National Capital Region, which specializes in Windows-based software development management, process, and implementation consulting.
Peter has worked with clients such as Mitel, Nortel, Passport Canada, and Innvapost, from mentoring, to architecture, to implementation. He has considerable experience in building software development teams and working with startups towards agile software development. Peter's experience ranges from designing and implementing simple stand-alone applications, to architecting n-tier applications spanning dozens of computers, and from C++ to C#.
Peter is active in the software development community, attending and speaking at various events, as well as authoring various works including Refactoring with Microsoft Visual Studio 2010, Packt Publishing.
There are countless people who have contributed to my knowledge and motivation to contribute to the community with projects like this book. In particular, I would like to thank Joe Miller for his sharp eyes and his clearly better editing abilities than mine.
I would also like to thank my wife Sherry for the continued love and support despite all the extra time I had to put into projects like book writing.
I would also like to thank my parents, Helen and Bruce; I still miss you.
Carlos Hulot has been working in the IT area for more than 20 years in different capacities, from software development and project management to IT marketing, product development, and management. He has worked for multinational companies such as Royal Philips Electronics, PricewaterhouseCoopers, and Microsoft.
Carlos currently works as an independent IT consultant. He is also a Computer Science lecturer at two Brazilian universities. Carlos holds a Ph.D. in Computer Science and Electronics from the University of Southampton, UK, and a B.Sc. in Physics from the University of São Paulo, Brazil.
Ahmed Ilyas has a BEng degree from Napier University in Edinburgh, Scotland, where he majored in software development. He has 15 years of professional experience in software development.
After leaving Microsoft, Ahmed set up his own consultancy company, Sandler Ltd. (UK), offering the best possible solutions for a multitude of industries and providing real-world answers to their problems. The company uses the Microsoft stack to build these technologies. It aims to bring the best practices, patterns, and software to its client base to enable long-term stability and compliance in the ever-changing software industry, pushing the limits in technology and improving software developers around the globe.
Ahmed has been awarded the MVP in C# by Microsoft three times, for providing excellence and independent real-world solutions to problems that developers face.
Ahmed's breadth and depth of knowledge has been obtained from his research and from the valuable wealth of information and research at Microsoft. The fact that 90 percent of the world uses at least one form of Microsoft technology motivates and inspires him.
Ahmed has worked for a number of clients and employers. His strong reputation has resulted in a large client base for his consultancy company, with clients from different industries, from media to medical and beyond. Some clients have included him on their "approved contractors/consultants" list, including ICS Solution Ltd. (placed on their DreamTeam portal) and EPS Software Corp. (based in the USA).
I would like to thank the author and the publisher for giving me the opportunity to review this book. I would also like to thank my client base and especially my colleagues at Microsoft for enabling me to become a reputable leader as a software developer in the industry, which is my passion.
Ken Tucker is a Microsoft MVP (2003–present) in Visual Basic and currently works at Amovius LLC in Melbourne, Florida (FL). He is also the President of the Space Coast .Net User Group and a frequent speaker at Florida Code Camps. Ken can be reached at <[email protected]>.
I'd like to thank my wife Alice-Marie.
You might want to visit www.PacktPub.com for support files and downloads related to your book.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
http://PacktLib.PacktPub.com
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read, and search across Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.
Get notified! Find out when new books are published by following @PacktEnterprise on Twitter, or the Packt Enterprise Facebook page.
When you are developing on the Microsoft platform, Visual Studio 2010 offers you a range of powerful tools and makes the entire process easier and faster. If you think that you can sit back and relax after learning it, you could not be further from the truth. To beat the crowd, you need to be better than others, and learn tips and tricks that others don't know yet. This book is a compilation of the best practices of programming with Visual Studio.
Visual Studio 2010 Best Practices will take you through the practices you need to master programming with the .NET Framework. The book goes on to detail several practices involving many aspects of software development with Visual Studio. These practices include debugging, exception handling, and design. It details building and maintaining a recommended practices library and the criteria by which to document recommended practices.
The book begins with practices on source code control (SCC). It includes different types of SCC and discusses how to choose them based on different scenarios. Advanced syntax in C# is then covered with practices covering generics, iterator methods, lambdas, and closures.
The next set of practices focuses on deployment, as well as creating MSI deployments with Windows Installer XML (WiX), including Windows applications and services. The book then takes you through practices for developing with WCF and web services.
The software development lifecycle is completed with practices on testing, such as project structure, naming, and the different types of automated tests. Topics such as test coverage, continuous testing and deployment, and mocking are included. Although this book uses Visual Studio as an example, you can use these practices with any IDE.
Chapter 1, Working with Best Practices, discusses several motivating factors about why we might want to use "recommended practices" and why we’re sometimes forced to resort to "recommended practices" rather than figure it out.
Chapter 2, Source Code Control Practices, looks at source code control terminology, architectures, and usage practices.
Chapter 3, Low-level C# Practices, looks at some low-level, language-specific practices. Topics like generics, lambdas, iterator members, extension methods, and exception handling will be detailed.
Chapter 4, Architectural Practices, looks at some architecture-specific practices. These practices will include things such as decoupling, data-centric applications, and a brief look at some recommendations for distributed architectures.
Chapter 5, Recommended Practices for Deployment, discusses installation technologies and covers some of the more common features required by the majority of application installations. The chapter focuses mainly on deployment of applications through Windows Installer.
Chapter 6, Automated Testing Practices, covers automated testing practices. Practices regarding test naming and structure, coverage, mocking, and types of tests will be covered.
Chapter 7, Optimizing Visual Studio, discusses ways of making Visual Studio operate more efficiently, making it work to our advantage, and making it friendlier to work with.
Chapter 8, Parallelization Practices, discusses techniques such as threading, distributed architecture, and thread synchronization. Technologies such as Task Parallel Library, Asynchronous CTP, and asynchronous additions to C# 5.0 and Visual Basic 10 are also covered.
Chapter 9, Distributed Applications, discusses ways of architecting distributed applications, as well as specific technologies that help communication of nodes within a distributed application. In addition, it covers ways of debugging, monitoring, and maintaining distributed applications.
Chapter 10, Web Service Recommended Practices, discusses web services. It covers practices with WCF services, ASMX services, implementing services, consuming services, and authentication and authorization.
.NET developers using Visual Studio for programming will find this book useful. If you are developing your application with C#, you will find better ways to do things with Visual Studio.
You should know the basics of development with the .NET Framework and have a working knowledge of Visual Studio.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.
To send us general feedback, simply send an e-mail to <[email protected]>, and mention the book title through the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website, or added to any list of existing errata, under the Errata section of that title.
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you valuable content.
You can contact us at <[email protected]> if you are having a problem with any aspect of the book, and we will do our best to address it.
In any given software developer's career, there are many different things they need to create. Given time constraints and resources, it's almost impossible for them to perform the research involved in producing everything correctly from scratch.
There are all sorts of barriers and roadblocks to researching how to correctly write this bit of code or that bit of code, use that technology, or this interface. Documentation may be lacking or missing, or documentation may be completely wrong. Documentation is the same as software: sometimes it has bugs. Sometimes the act of writing software is unit testing the documentation. This, of course, provides no value to most software development projects. It's great when the documentation is correct, but when it's not, it can be devastating to a software project.
Even with correct documentation, sometimes we don't have the time to read all of the documentation and become total experts in some technology or API. We just need a subset of the API to do what we need done and that's all.
I call them "recommended practices" instead of "best practices." The superlative "best" implies some degree of completeness. In almost all circumstances, the completeness of these practices has a shelf-life. Some best practices have a very small shelf-life due to the degree to which technology and our knowledge of it changes.
Recommended practices detail working with several different technologies with a finite set of knowledge. Knowledge of each technology will increase in the future, and each technology will evolve in the future. Thus, what may be a best practice today may be out of date, obsolete, and possibly even deprecated sometime in the future.
One of the problems I've encountered with "best practices" is the gospel people infer from "best". They see "best" and assume it means "best always and forever." In software, that's rarely the case. To a certain extent, the Internet hasn't helped matters either. Blogs, articles, answers to questions, and so on, are usually on the Internet forever. If someone blogged about a "best practice" in 2002, it may very well have been the recommended approach when it was posted, but may be the opposite now. Just because a practice works doesn't make it a best practice.
Sometimes the mere source of a process, procedure, or coding recipe has the reader inferring "best practice." This is probably one of the most disturbing trends in certain software communities. While a source can be deemed reliable, not everything that a source presents was ever intended to be a "best practice"; much of it is documentation at best. Be wary of accepting code from reputable sources as "best practices." In fact, read on to get some ideas on how to either make that code one of your recommended practices, or refute it as not being a best practice at all.
Further, some industries or organizations define business practices. They're defined as the one and only practice, and sometimes referred to as "best" because there is nothing to compare them against. I would question the use of "best" in such a way, because it implies comparison with at least one other practice that was deemed insufficient in some way. To that end, in software practices, just because there is only one known way to do something, that doesn't mean it should be coined a "best practice."
Many other people have questioned "best" in "best practice." Take Scott Ambler, for example. Scott is a leader in the agile software development community. He espouses "contextual practices," as any given "best practice" is limited to at least one context. As we'll see shortly, a "best practice" may be good in one context but bad in another context.
"Best" is a judgment. While the reader of the word "best" judges a practice as best through acceptance, in the general case, most "best practices" haven't really been judged. For a practice to be best the practice needs to be vetted, and the requisite work involved in proving how and why the practice is best is never done. It's this very caveat that make people like Eugene Bardach question "best practices" as a general concept. In his article The Problem with "Best Practice", Bardach suggests terms like "good" or "smart." But that's really the same problem. Who vets "good" or "smart?" At best they could be described as "known practices."
Without vetting, a practice is often taken at face value by the reader, based either solely on the fact that "best" was used, or on the source of the practice. This is why people like Ambler and Bardach are beginning to openly question the safety of calling something a "best practice."
Most practices are simply a series of steps to perform a certain action. Most of the time, context is either implied or the practice is completely devoid of context. It leaves the reader with the sense that the context is anywhere, which is dangerous.
There is no point to using practices if they don't add any value. It's important to understand at least some of the benefits that can be obtained from using practices. Let's have a look at some of the common practices.
We can sometimes find good documentation. It describes the API or technology correctly and includes sample code. Sample code helps us understand the API as well as the concepts. I don't know about you, but I think in code; sample code is often easier for me to understand than prose. But sample code is a double-edged sword.
One drawback of sample code is that it may appear to have the effect you're looking for, so you take it at face value and re-use it in your code. This is a form of pragmatic re-use.
Pragmatic re-use is when a developer re-uses code in a way in which the original code was not intended to be re-used. This is quite common, and one of the most common forms of pragmatic re-use is copying and pasting code, such as copying and pasting the sample code described earlier.
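As a concrete illustration (a hypothetical snippet of my own, not taken from any particular product's documentation), consider sample code that parses a price. Copied verbatim, it works on the sample author's machine, but it silently assumes that machine's culture settings:

    using System;
    using System.Globalization;

    class PriceParsing
    {
        static void Main()
        {
            // Typical documentation-style sample, pasted as-is:
            // works under en-US, where '.' is the decimal separator.
            string priceText = "1.50";
            double price = double.Parse(priceText);

            // Under a culture such as de-DE, where '.' groups digits,
            // the same call yields 150 instead of 1.5. The sample never
            // mentioned culture, so the pasted code carries a latent bug.
            double safePrice = double.Parse(priceText,
                CultureInfo.InvariantCulture);

            Console.WriteLine("{0} vs {1}", price, safePrice);
        }
    }

The pasted code isn't wrong in its original context; it becomes wrong when re-used in a context the sample never considered.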
In C#, classes are open for derivation unless they are modified with the sealed keyword to prevent inheritance. However, the absence of sealed doesn't necessarily imply that the class is intended to be derived from. Deriving from such a class is another form of pragmatic re-use, because it's being re-used where re-use was not expected.
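A minimal sketch of this kind of re-use (using List<string> purely as an example of a class that happens not to be sealed) shows why it is risky. The derivation compiles, but because none of List<T>'s members are virtual, the added behavior is easily bypassed:

    using System;
    using System.Collections.Generic;

    // List<T> is not sealed, but it was not designed as a base class:
    // none of its members are virtual.
    class UniqueStringList : List<string>
    {
        // "new" hides List<string>.Add; it does not override it.
        public new void Add(string item)
        {
            if (!Contains(item))
                base.Add(item);
        }
    }

    class Program
    {
        static void Main()
        {
            var strings = new UniqueStringList();
            strings.Add("a");
            strings.Add("a");          // filtered by the hiding Add

            List<string> asBase = strings;
            asBase.Add("a");           // bypasses UniqueStringList.Add

            Console.WriteLine(strings.Count); // prints 2, not 1
        }
    }

Nothing stops the derivation, but the invariant the derived class tries to enforce can be silently violated through the base type: exactly the unintended re-use described above.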
There are many motivators for pragmatic re-use. When a developer has neither the time nor the resources to learn code to perform a certain task, they often resort to a form of pragmatic re-use such as copy and paste.
Technical debt is a fairly well understood concept, but it bears repeating as one of the potential motivators of best practices. Technical debt refers to the negative consequences of code, design, or architecture. There are all sorts of negative consequences that can occur from code. One common example is code with no test coverage; the negative consequence is the instability introduced by any change to that code.
Pragmatic re-use has the side-effect of taking on technical debt. At the very least, the code is doing something in a way it was never intended to. This means it was not designed to do that, and therefore could never have been tested to work correctly in that scenario. The most common impetus for pragmatic re-use is that the developer either didn't understand how to do it himself, or didn't understand the original code. This means there is code in the code base that potentially no one understands: no one understands why it works, how to test it correctly, what to do if something goes wrong with it, or how to change it correctly in response to changing requirements.
To be clear, technical debt isn't always bad. A team can take on technical debt for a variety of reasons. The important part is that they know the consequences and are willing to live with them, maybe for a short period of time, to get some sort of benefit. This benefit could be time-to-market, proof-of-concept (maybe directly related to funding), meeting a deadline, budget, and so on.
There are all sorts of great sources of information on managing technical debt, so we won't get into technical debt beyond its role as an impetus for using best practices. If you're not clear on technical debt, I recommend learning more about it as an exercise. Perhaps Martin Fowler's bliki (http://martinfowler.com/bliki/TechnicalDebt.html) or Steve McConnell's blog (http://blogs.construx.com/blogs/stevemcc/archive/2007/11/01/technical-debt-2.aspx) would be a good start.
Not invented here (NIH) syndrome has become much better understood over the past decade or so. There was a time when there were only a handful of developers in the world developing software. Development teams needed to figure out for themselves how to write basic data structures such as linked lists, basic sorting algorithms such as quick sort, and how to perform tasks such as spell checking. This knowledge wasn't generally available, and componentization of software had yet to occur. The value of a project was overshadowed by the sheer complexity of the infrastructure around producing effective software.
Fast-forward slightly to an era of componentized software. Components, libraries, APIs, and frameworks began to appear that took the infrastructure-like aspects of a software project and made them sharable components that anyone, within reason, could simply drop into their project and start using. Presumably, the time required to understand and learn the API would be less than the time required to write that component from scratch.
For a select few people, this wasn't the case. Their ability to write software was at such a high level that understanding and accepting an API was (they thought) a greater friction than writing their own. Thus, the NIH syndrome began. Because a certain technology, library, API, or framework wasn't invented by a member of the development team, and therefore wasn't under their entire control, it needed to be written from scratch.
In the early days, this wasn't so bad. Writing a linked list implementation was indeed quicker (for most people) than trying to download, install, and understand someone else's linked list implementation. But these libraries grew to millions of lines of code and hundreds of person-hours' worth of work, and NIH continued. Language frameworks and runtimes became more popular. C++'s STL, Java, .NET, and so on, included standard algorithms (frameworks) and abstractions to interface with underlying operating systems (runtimes), so it became harder to ignore these libraries and write everything from scratch. But the sheer magnitude of the detail and complexity of these libraries was difficult to grasp, given the detail of the documentation. In order to better utilize these libraries and frameworks, information on how to use them began to be shared. Things like best practices made it easier for teams to accept third-party libraries and frameworks. Lessons learned were being communicated within the community as best practices. "I spent 3 days reading documentation and tweaking code to perform operation Y; here's how I did it" became common.
Practices are a form of componentization. We don't actually get the component, but we get instructions on where, why, and how to make our own component. It can help us keep our software structured and componentized.
Some methodologies from other disciplines have recently begun to be re-used in the software industry. Some of that has come from lean manufacturing, such as kaizen, and some from the martial arts, such as katas. Let's have a brief look at using these two methodologies.
In the martial arts, students perform what are known as katas. These are essentially choreographed movements that the student is to master. Students master these katas through repetition, or practice. Depending on the type of martial art, students advance through dan grades based on judgments of how well they can perform certain katas.
The principle behind kata is muscle memory. As students become proficient in each kata, the movements become second nature to them and can be performed without thought. The idea is that in battle, the muscle memory gained from the katas becomes reflexive, making the fighter more successful.
In the software development community, kata-like sessions have become common. Developers take on specific tasks to be done in software. One motivation is to learn how to do that task; another is to repeat it as a way of remembering how to implement that specific algorithm or technique. The theory is that once you've done it at least once, you've got "muscle memory" for that particular task. At worst, you now have experience in that particular task.
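As a concrete example (my illustration; the practice itself doesn't prescribe any particular exercise), one of the classic community coding katas is FizzBuzz:

    using System;

    class FizzBuzzKata
    {
        // The exercise: print 1 through 100, replacing multiples of 3
        // with "Fizz", multiples of 5 with "Buzz", and multiples of
        // both with "FizzBuzz". The task is trivial by design; the
        // value is in repeating and refining the implementation.
        static void Main()
        {
            for (int i = 1; i <= 100; i++)
            {
                if (i % 15 == 0) Console.WriteLine("FizzBuzz");
                else if (i % 3 == 0) Console.WriteLine("Fizz");
                else if (i % 5 == 0) Console.WriteLine("Buzz");
                else Console.WriteLine(i);
            }
        }
    }

The point of repeating such an exercise isn't the output; it's that the mechanics of loops, conditionals, and incremental refactoring become second nature.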
"Kata" suffers slightly from the same syndrome as "best practice", in that "kata" isn't necessarily the most appropriate term for what is described previously. Getting better at a practice through repeated implementation results in working code. Kata is repeating movement not necessarily so the movement will be repeated in combat/competition, but so that your mind and body have experience with many moves that it will better react when needed. Software katas could be better viewed as kumites ("sparring" with code resulting in specific outcomes) or kihons (performing atomic movements like punches or kicks). But this is what coding katas have come to signify based on a rudimentary understanding of "kata" and the coding exercises being applied.
At one level, you can view practices as katas. You can implement them as-is, repeating them to improve proficiency and gain experience the more you practice. At another level, you can consider these practices a part, or a start, of your library of practices.
In the past few years, much process improvement in the software industry has been borrowed from Japanese business and social practices. Much like kata, kaizen is another adopted word in some circles of software development. It comes from the principles of lean manufacturing, which was originally attributed to Toyota. Kaizen, in Japanese, means "improvement."
This book does not attempt to document a series of recipes, but rather a series of starting points for improvement. Each practice is simply one way of eliminating waste. At the most shallow level, each practice illuminates the waste of trying to find a way to produce the same results as the detailed practice. In the spirit of kaizen, think of each practice as a starting point: a starting point not only to improve yourself and your knowledge, but to improve the practice as well.
Once you're comfortable with practices and have a few under your belt, you should be able to start recognizing practices in some of the libraries or frameworks you're using or developing. If you're on a development team that has its own framework or library, consider sharing what you've learned about the library in a series of recommended practices.
How would you start with something like this? Well, recommended practices are based on people's experience, so start with your experiences with a given framework or library. If you've got some experience with a given library, you've probably noticed certain things you've had to do repeatedly. As you've repeated certain tasks, you've probably built up ways of doing them that are more correct than others, and that have evolved over time to get better. Start by documenting what you've learned and how it has resulted in something that you'd be willing to recommend to someone else as a way of accomplishing a certain task.
It's one thing to accept practices to allow you to focus on the value-added of the project you're working on. It's another to build on that knowledge and build libraries of practices, improving, organizing, and potentially sharing practices.
At one level, a practice can simply be a recipe. This is often acceptable: "just do it this way." Sometimes, though, it might not be obvious why a practice is implemented in a certain way. Including the motivators or impetus behind why the practice is the way it is can be helpful, not only to people learning the practice, but also to people already skilled in that area of technology. People with skills can then open a dialog to provide feedback and begin collaborating on evolving practices.
Okay, but really, what is a "best practice?" Wikipedia defines it as:
"...a method or technique that has consistently shown results superior to those achieved with other means...".
The only flaw in this definition is that when there's only one way to achieve certain results, that way can't be "best", because there are no other means to compare it against. "...method or technique" leaves it pretty open to interpretation as to whether something could be construed as a best practice. If we take these basic truths and expand on them, we can derive a way to communicate recommended practices.
The technique or method is pretty straightforward (although ambiguous to a certain degree). It really just distills down to a procedure or a list of steps. This is great if we want to perform or implement the practice, but what do we need to communicate the procedure, intent, impetus, and context?
I could have easily jumped into using practices first, but one of the points I'm trying to get across here is the contextual nature of practices, whether they're referred to as "best practices" or not. I think it's important to put some thought into the use of a practice before using it. So, let's look at evaluation first.
Once we define a practice we need a way for others to evaluate it. In order to evaluate practices, an ability to browse or discover them is needed.
In order for someone else to evaluate one of our practices, we need to provide the expected context. This will allow them to compare their context with the expected context to decide if the practice is even applicable.
In order for us to evaluate the applicability of another practice, we need to know our own context. This is an important point that almost everyone misses when accepting "best practices." The "best" implies there's no need for evaluation: it's "best", right? Once you can define what your context means, you can better evaluate whether the practice is right for you as-is, whether it can be used with a little evolution, or whether it simply isn't right for you.
Documenting a practice is an attempt at communicating that practice. To a certain degree, written or diagrammatic documentation suffers from an impedance mismatch. We simply don't have the same flexibility in those types of communication that we do in face-to-face or spoken communication. The practice isn't just about the steps involved or the required result, it's about the context in which it should be used.
I have yet to find a "standard" way of documenting practices. We can pull from some of what we've learned about patterns and devise a more acceptable way of communicating practices. We must first start with the context in which the practice is intended to be used, or the context in which the required outcome applies.
Scott Ambler provides some criteria for providing context about teams that can help a team evaluate or define their context. These factors are part of what Ambler calls Agile Scaling Model (ASM). The model is clearly agile-slanted, but many of the factors apply to any team. These factors are discussed next.
This involves the distribution of the team. Is the team co-located, or is it distributed over some geographic area? This distribution could be as small as cubicles separated by other teams, or team members separated by floors, or in different buildings, cities, or countries and time zones. A practice that assumes a co-located team might be more difficult to implement with a globally-distributed team. Scrum stand-ups are an example. Scrum stand-ups are very short meetings, held usually once a day, in which everyone on the team participates to communicate what they have worked on, what they are working on, and any roadblocks. Clearly, it would be hard to do a "stand-up" with a team geographically distributed across ten time zones.
Team size is fairly obvious and can be related to geographic distribution (smaller teams are less likely to be very geographically distributed). Although different from geographic distribution, similar contextual issues arise.
Many companies are burdened with complying with regulatory mandates. Public companies in the United States, for example, need to abide by Sarbanes-Oxley. This basically defines reporting, auditing, and responsibilities an organization must implement. Applicability of practices involving audit or reporting of data, transactions, customer information, and so on, may be impacted by such regulations.
Domain complexity involves the complexity of the problem the software is trying to solve. If the problem domain is simple, certain best practices might not be applicable. A calculator application, for example, may not need to employ domain-driven design (DDD), because the extra overhead of managing domain complexity may be more complex than the domain itself. An insurance domain, on the other hand, may be so complex that using DDD to partition the domain complexity would make it easier to manage and understand.
Similar to team distribution, organizational distribution relates to the geographic distribution of the entire organization. Your team may be co-located but the actual organization may be global. An example of where a globally-distributed company may impact the viability of a practice could be the location of the IT department. If a particular practice involves drastically changing or adding to IT infrastructure, the friction or push back to implementing this practice may outweigh the benefit.
Technical complexity can be related to domain complexity, but really involves the actual technical implementation of the system. Simple domain complexity could be implemented in a distributed environment using multiple subsystems and systems, some of which could be legacy systems. While the domain may be simple, the technical complexity is high. For example, practices involving managing a legacy system or code would not be applicable in a greenfield project where there are yet to be any legacy systems or code.
Organizational complexity can be related to organizational distribution but is generally independent. It's independent for our purposes of evaluating a practice. For example, in a complex organization with double-digit management levels, it may be easier to re-use hardware than it is to purchase new hardware. Practices that involve partitioning work amongst multiple systems (scaling out) may be more applicable than best practices that involve scaling up.
Some enterprises have teams that drive their own discipline, and some enterprises have consistent discipline across the enterprise, not just the software development effort. Practices that are grounded in engineering disciplines may be easier to implement in enterprises that are already very disciplined.
Some projects span a larger life cycle than others. Enterprise applications, for example, often span from conception to IT delivery and maintenance. Practices that are geared towards an off-the-shelf model of delivery (where deployment and maintenance are done by the customer) and ignore the enterprise-specific aspects of the project may be counterproductive in a full life-cycle scope.