Now that we have walked through the basics in my posts on Testing 101, 200 Level, and 300 Level testing, the number of bugs in your production environment should be falling. To continue to eliminate the nasty bugs, it is time to dig deep, get creative, and use skills from other fields. Here are some out-of-the-box ideas to continue the momentum and focus on quality and stability of your software products.
- Shift Left – this is the general principle to find bugs as soon as possible in the software development process – write automation tests to find bugs in QA, write unit tests to find bugs in development, use TDD to find bugs before you write code, architect testability into your platform before you even create the solution. TechBeacon has a great article about Shift Left.
- Crowdsource Testing – sometimes you need to get a fresh, new perspective on testing. Using the general public can help you get there. I have used a company called uTest, and they were great. They have all kinds of purchase options. There are lots of other crowdsource testing companies to choose from.
- Fuzzing – I have done this before and never knew it was called fuzzing. Build automation tests that randomly generate characters (ISO-8859-1 and UTF-8) and submit forms, to test positive and negative cases you never thought of. Run it thousands of times, and you are bound to find some errors. Read F-Secure to learn more about fuzzing.
- Innovation – When most people think of innovation, they think of a bolt of lightning, striking randomly. While this may be true some of the time, it isn’t always the case. There are methods to developing and trying new ideas, and that kind of innovation needs a culture to support it. This great article on Medium outlines some of the methodologies to use to find innovation.
- Design Thinking – Over the last few years, I have been working with Rutgers on their Advisory Board for their Design Thinking certificate program. I stay on top of the subject, and tweet a lot of great articles to read on the topic. When I came across this article on DT and Software Quality, I knew I needed to share it.
- Bug Bash / Bug Hunt / Bug Day – Make finding bugs a social event. Invite your team, or other teams, to find bugs in your product. Set aside an afternoon, order some pizza, plan some prizes, and find some bugs! Read more about how to run a Bug Hunt.
- Quality Hackathon – another way to generate ideas about new approaches to testing and quality is to hold a hackathon, one that focuses specifically on testing. Are there new ways to automate? New tools to try? New approaches to testing? Challenge your team, set some parameters, pick some judges, give away some prizes, and make it fun!
- Apply Gamification to Testing – In World of Warcraft, there are monsters to kill, skills to learn, reputation to gain, equipment to discover. You can do the same thing with Quality and Gamification – each person on the team has a QE Score, and gets points for bugs found, classes taken, Tech Talks conducted, junior teammates mentored. Define a reward system, how to grow, and publicly praise the growth – different levels, badges, and achievements to earn and the steps to get there. Read more about gamification on CIO.com.
- Machine Learning and Testing – Can we teach a machine to find bugs in software that humans write? Or identify what sections of our software systems will most likely have the most bugs? Or what sections of code are the most complex? Maybe we should give it a shot…
- Measure your Test Maturity – According to CIO Magazine, “The Capability Maturity Model Integration (CMMI) is a process and behavioral model that helps organizations streamline process improvement and encourage productive, efficient behaviors that decrease risks in software, product and service development.” Initially developed by the Software Engineering Institute at Carnegie Mellon University, in conjunction with the DoD and U.S. Government, CMMI is currently administered by the CMMI Institute as a process improvement tool for projects, divisions or organizations. CMMI defines 5 levels of maturity – (1) Initial, (2) Repeatable, (3) Defined, (4) Managed, and (5) Optimizing. If thought of in a different way, this maturity model could be specifically applied to software quality and testing, and used to measure current state and set goals for improvement.
- Think Different – While this may be the slogan for a famous ad campaign for a large tech company named after Newton’s fruit, it is also a technique to help people solve problems by taking a different mental approach – Creative, Analytical, Critical. Concrete, Abstract, Divergent, Convergent, Sequential, and Holistic thinking. Read more about software testing and different thinking types on testingexcellence.com.
- Complexity = Errors – Exploit the correlation between complexity and errors. A rule of thumb is that one component should do one thing, and do it well. Make complex classes smaller. Break monolithic services into microservices. Break large database tables into multiple, smaller tables. The same is also true for tests: make unit tests smaller, and test smaller sections of the user journey. And aim your testing spotlight at the complex areas of your application – UI, services, and data; integration points; service signatures; common failure points.
- Wear different Thinking Caps – It’s a common enough expression… “put on your thinking cap.” The 1985 book Six Thinking Hats by Edward de Bono promotes multi-dimensional thinking by mapping different colored hats to different cognitive styles – blue for an overview, white for facts, red for emotion, yellow for optimism, black for risks, green for creativity. Put on different thinking caps, and innovate in whole new ways.
- World Quality Report – “The World Quality Report is the only global report analyzing software testing and quality engineering trends. It presents an analysis of developments in agile and DevOps, artificial intelligence, automation, test environments, data, security and budgets, showing once again the importance of quality, and of the measures that are put in place to maintain it.” – sogeti (part of capgemini)
- Online Resources – Things change, evolve, and improve. It is our obligation as technologists to stay on top of those changes, and there are a lot of resources that can help you do that: training, conferences, certifications, and industry trends. Sometimes it is just staying on top of the news as it happens. While doing research for my blog posts, some of the best resources for me were Ministry of Testing, Software Testing Help, StickyMinds, InfoWorld, and Software Testing Magazine.
- Training – Continuing your education as a technologist and quality engineer is critical. There are lots of video training resources available online. Pick your favorite and keep learning. Two of my favorites are Udemy (and their class on Software Testing and Innovation) and Lynda (and their class on Exploratory Testing).
- Conferences – Learn about new techniques, hear from industry experts, plan for future innovation, look at products from vendors, continue your education. Check out a list of conferences from TechBeacon and testingconferences.org .
- Certifications – Certifications are more than just a test and a score. They are managed by a company with resources, continuing education, advanced certifications, and a community of professionals you can tap into. Check out a list of certifications for Software Testing professionals.
- Industry Trends – Remaining on top of industry trends allows you to adapt to current challenges, leverage new tools and techniques, and benefit both your skills and your company. Read more about quality assurance trends in 2019.
- Beyond The Bugs – What I have focused on in these last four posts is how to find bugs in your code – ones that would functionally prevent users from using your site. But there are lots of non-functional ways to test your site: security testing, penetration testing, performance testing, reliability testing, efficiency testing, maintainability testing, portability testing… and plenty more, I am sure. Keep on expanding your knowledge, wide and deep.
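The fuzzing idea above can be sketched in a few lines of Python. Everything here is illustrative: `validate_username` is a hypothetical form handler standing in for whatever input path you want to exercise, and a real harness would drive a browser or API (or use a dedicated fuzzer) rather than call a function directly.

```python
import random
import string

# Character pool mixing ASCII, ISO-8859-1, and a few wider Unicode code points.
FUZZ_POOL = (
    string.printable
    + "".join(chr(c) for c in range(0xA0, 0x100))  # Latin-1 upper half
    + "\u00e9\u2603\u4e2d\U0001F600"               # é, snowman, CJK, emoji
)

def random_fuzz_string(max_len=64):
    """Generate one random candidate input."""
    return "".join(random.choice(FUZZ_POOL)
                   for _ in range(random.randint(0, max_len)))

def validate_username(name):
    """Hypothetical form handler under test: 3-20 alphanumerics/underscores."""
    if not 3 <= len(name) <= 20:
        raise ValueError("bad length")
    if not all(c.isalnum() or c == "_" for c in name):
        raise ValueError("bad character")
    return name

def fuzz(runs=1000, seed=42):
    """Hammer the handler; clean rejections are fine, anything else is a bug."""
    random.seed(seed)
    failures = []
    for _ in range(runs):
        candidate = random_fuzz_string()
        try:
            validate_username(candidate)
        except ValueError:
            pass  # expected, graceful rejection
        except Exception as exc:
            failures.append((candidate, exc))  # a crash is a bug worth filing
    return failures

print(len(fuzz()), "unexpected failures")
```

Because the seed is fixed, every failure tuple is reproducible, which turns a random crash into a concrete bug report.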
In the list above, I just scratch the surface on each one of these ideas. Spend time researching each of them, dig deeper into the ideas, or come up with your own. Have an idea, technique, or best practice I haven’t covered? Leave me some comments and let me know.
Leveraging a process is the basis for defining and executing a test strategy. It allows the development team to focus on repeatability, stability, speed, and results. While researching the landscape of test design techniques, I came across three very good articles that outline a clearly defined set of test design strategies (articles from Invensis, Art of Testing, and Test Automation Resources). These articles outline techniques based on static analysis of code versus dynamic testing of running code; manual and automated testing; and black box, white box, and experience-based testing. Some of these have been discussed in my articles on Testing 101 and 200 Level Testing. Below is a summary of each; follow up with the original articles for more details and examples.
Static Test Design Techniques
- Walk through – a step-by-step review of the features and documentation, led by the author, to help the team better understand the software.
- Informal review – as stated, these are more informal discussions to gather information without the documentation or code.
- Technical review – more of a peer review of the application.
- Audit – a formal review comparing code to documentation by an external source.
- Inspection – a formal review by trained moderators, documenting defects in code and documentation through a detailed process.
- Management review – a review of the project documents – project plan, budget, metrics, objectives and results, etc.
With the Help of Tools
- Analysis of coding standards (using compiler) – comparing the code against a set of rules, conventions, and standards defined within a tool or document.
- Analysis of code metrics – analysis of things like cyclomatic numbers, complexity, nesting, lines of code, code coverage, etc.
- Analysis of code structure – an analysis of the application by following the flow of data or paths through the code. Also analyzes the structure of the data and the code itself.
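The code-metrics idea above can be made concrete with a toy example: a rough cyclomatic-complexity count using Python's standard `ast` module. This is a simplification of the real metric (tools like SonarQube compute it properly); it just counts branch points in the parse tree.

```python
import ast

# Node types that add a branch point (a simplification of true cyclomatic
# complexity, but close enough to spot the scary functions).
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                ast.BoolOp, ast.ExceptHandler)

def rough_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for _ in range(n):
        pass
    return "positive"
"""

print(rough_complexity(sample))  # -> 4 (two ifs + one loop + 1)
```

Run this over every file in a repository and sort descending, and you have a crude map of where the bugs are most likely hiding.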
Dynamic Test Design Techniques
Specification-based or Black-Box techniques
- Boundary Value Analysis – test all field input values at the boundaries – highest, lowest, etc.
- Decision Table Testing – map combinations of input conditions to expected actions in a table, and write a test for each rule in the table.
- State Transition Diagrams – test each of the states of the application, particularly workflow steps.
- Equivalence Partitioning – reduce your number of tests by determining ones that test the same thing, and return the same results.
- Use Case Testing – define scenarios based on business functionality or user functionality.
- Combinatorial Testing – Randomly selected values, all possible values, each choice in at least one test, all-pairs or pair-wise or n-wise combinations, etc.
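Boundary value analysis and equivalence partitioning pair naturally in a single table-driven test. A minimal sketch, where `age_category` is a made-up system under test and the case table holds each partition's edges plus one mid-partition representative:

```python
def age_category(age):
    """Hypothetical system under test: classify an age into a partition."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# Boundary values (the edges of each partition) plus one representative per
# equivalence class -- testing 30, 40, and 50 separately adds no information.
CASES = [
    (0, "minor"), (17, "minor"),      # boundaries of the "minor" class
    (18, "adult"), (64, "adult"),     # boundaries of the "adult" class
    (65, "senior"), (120, "senior"),  # boundary + upper value of "senior"
    (30, "adult"),                    # one mid-partition representative
]

def run_cases():
    for age, expected in CASES:
        assert age_category(age) == expected, (age, expected)
    return len(CASES)

print(run_cases(), "cases passed")
```

Seven targeted cases here give the same defect-finding power as testing every age from 0 to 120, which is the whole point of partitioning.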
Structure-based or White-Box techniques
- Statement Coverage or Line Coverage – similar to code metrics, analyzing the amount of code that has been exercised by tests.
- Condition Coverage or Predicate Coverage – all conditions (i.e. true or false) are tested.
- Decision Coverage or Branch Coverage – every outcome of each decision point is tested, i.e. each branch is taken at least once.
- Multiple Condition Coverage – all values in all conditions are tested.
Experience-based techniques
- Exploratory Testing – similar to an informal review, this testing is based on a general understanding of the application, product, domain, company, etc. and the experience and intuition of the tester.
- Error Guessing or Fault Attack – leveraging prior experience and expertise, guess where the cracks are in the application, and focusing the testing there.
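The coverage levels above differ even on a tiny function. In this sketch (names are illustrative), one test reaches every statement, a second test is needed for branch coverage, and all four condition combinations are needed for multiple condition coverage:

```python
def discount(price, is_member, has_coupon):
    """Toy function under test: one decision made of two conditions."""
    percent_off = 0
    if is_member or has_coupon:   # one decision, two conditions
        percent_off = 10
    return price * (100 - percent_off) // 100

# Statement coverage: this single test executes every line of the function...
assert discount(100, True, False) == 90
# ...but branch coverage also requires the decision to be False once:
assert discount(100, False, False) == 100
# Multiple condition coverage requires every combination of the conditions:
assert discount(100, False, True) == 90
assert discount(100, True, True) == 90
```

With n independent conditions the combinations grow as 2^n, which is why teams usually settle for branch coverage plus targeted condition tests on the riskiest logic.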
How to Choose the Right Technique
Once you have a general understanding of test design techniques, choosing the right approach is the most critical next step. Here are some of the decision points to pick the right one:
- Application Type – based on requirements for the domain as well as mobile vs. web applications.
- Regulatory standards – must follow conventional rules based on IT, countries, government agencies, etc.
- Customer’s requirements – based on relationships or contracts with customers.
- Risk Level and Type – This includes business risk, legal risk, compliance risk, brand risk, etc.
- Objectives – Focus on the objectives of your testing.
- Test Expertise – knowledge of the application, availability of documentation, familiarity with the techniques, etc.
- Time and budget – What will provide the biggest value that fits your schedule.
- SDLC – Waterfall, Agile, Scrum, Kanban, Extreme… each affects which technique will fit.
- Defect History – What kind of bugs have you found already for this app, in other apps, across the domain, etc.
All developers know that they need to test their code. What new developers don’t understand is that the longer a bug exists in their code throughout the software development lifecycle, the more expensive it is to fix. So what do you do? You start with the basic techniques I discussed in my last post. But that is not the end, only the beginning of the quality journey. Your next step is to bake quality into your process, whether it is waterfall, agile, kanban, lean, etc., and think about quality in every stage. See some examples below of how to include testing throughout each phase of your software development process.
- Architectural Design – Plan your solution out before you write your code. This includes how you will test your application. Sometimes this means writing your functions or methods differently, or creating some test harness code. Plan this out beforehand.
- Code Standards, Conventions, and Style Guides – Set the standards with the team, making code easier to read, modify, test, and predict. Do you want to follow certain naming conventions? Are you a fan of Hungarian Style? Particular about indenting? Does bracket placement matter? How about spacing? Document it all, and stick to it.
- Use your User Personas – I bet at one time or another, you or your UX department has defined a series of personas to help define and prioritize features. You should do the same for your code – use the journeys of your personas to help you define what is critical for testing.
- Unit test your front end code, too – You have probably written lots of unit tests against your service layer and database code. And, you probably have functional tests to exercise all that front end code. But don’t forget to unit test that front end code, too. Libraries to facilitate this are mocha, karma, jasmine, jest, enzyme, selenium webdriver, cypress, puppeteer, protractor, and many more.
- Improve your Definition of Ready – Improving your code quality starts with purpose – what is the objective of your effort? Ensure you have documented enough of your expected outcomes. Just be sure not to take it too far – this could become an agile anti-pattern.
- Improve Acceptance Criteria – Two great mnemonics to improve your acceptance criteria are INVEST and SMART. Read more about them here. Remember the thoroughness of your acceptance criteria directly impacts code quality.
- Peer Code Reviews – One of the best ways to identify bugs as early in the process as possible. Two heads are better than one. Learn from your peers and catch errors early. And here is a great article you can read about peer code reviews.
- Static Code Analysis Tools – Just like your coding standards and style guide, these tools analyze your source code (usually straight from your source code repository), compare your code against a wide range of rules, and help you identify areas that need help. Some examples of popular Static Code Analysis tools are Lint, AppScan, FxCop, StyleCop, Resharper, NUnit, SonarQube, and others.
- Dynamic Code Analysis / Vulnerability Test Tools – These tools are run against code actively running on a server in an attempt to measure resources used, find complexity, identify errors, or uncover vulnerabilities.
- Test Driven Development – Following this methodology, you think about your code first, write test cases that fail, and finally write your code until your tests pass. Then you can write more test cases, or adjust, until you are complete.
- Behavior Driven Development – This is a methodology to define test scenarios using natural language, following a specific pattern, just like a user story. In fact, these scenarios can become your acceptance criteria and, written in the Gherkin syntax, can be executed directly as tests by a tool like Cucumber. Read more about BDD, and the benefits of BDD and TDD.
- Refactor – Just as much an art as a science, refactoring your code helps you to reorganize, simplify, provide focus and improve.
- Test Data, Test Data, Test Data – Sometimes, the key to testing all your scenarios is setting up your test data. Setting up all your automated test scenarios? Need an existing user to test a new feature? Need to mock results from a downstream system? Running a load test? You need data. Plan this out just as diligently as you would your test scenarios.
- Run your tests in all environments – Can you run your tests on your local machine? In Dev, Test, and Production? Both Green and Blue environments? Can you test all your downstream systems too? Is there data to be purged, or added to the environment? Need a flag to indicate this is test data? Plan this ahead – with your infrastructure teams, your partner systems, databases, and product teams to make this happen.
- Plan your Rollback before you need to – Not every release is a success. Faced with a production issue, you have to make the tough decision… roll back, or forward fix? What if you used Flagr, your entire feature was behind a feature flag, and could be turned off at the flip of a switch? What if you deployed using a blue / green strategy, and could roll back by flipping environments? Or kept a library of deployed code to re-implement at a moment’s notice? Plan your strategy ahead of time.
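The TDD bullet above follows a red-green-refactor loop: write a failing test first, then just enough code to make it pass. Here is a minimal sketch using `unittest` and a hypothetical `slugify` helper (both the function and its rules are invented for illustration):

```python
import re
import unittest

# Step 1 (red): write the tests first, against a function that does not
# exist yet, and watch the suite fail.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("Tips & Tricks!"), "tips-tricks")

# Step 2 (green): write just enough implementation to make them pass.
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Step 3 (refactor): clean up with the passing tests as a safety net,
# then repeat the loop for the next behavior.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("green" if result.wasSuccessful() else "still red")
```

The discipline is in the ordering: because the test existed before the code, you know the test can fail, which is exactly the guarantee a test written after the fact cannot give you.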
My next blog post will cover test strategies and techniques that can help you better define your test cases, and approach your code in unique, thorough, and thoughtful ways.
Things certainly have changed. I remember a day when testing meant writing up a set of scenarios to manually test, or having a checklist of features to repeatedly test against on deployment day. Now, with modern tools, there are lots more ways to test your code. Automation can repeatedly test those scenarios for you, and ensure you didn’t break your code when new changes are made. You can use these automation tests in lots of different ways, along with some basic things to keep in mind to ensure those tests are as valuable as possible.
Basic Types of Automation
- Unit Testing with Code Coverage – Write reusable test cases that test your functions, methods, and classes. Integrate them into your automated build pipeline, and measure how much of your code is tested. Aim for 100%, but expect a bit less. Some unit testing tools are JUnit, NUnit, CA LISA, Rhino Mocks, and many more.
- Functional / System Testing – Write tests from your customer’s perspective: walk through your application in the browser, fill in the fields with predefined values, and submit. Test whether the data is correct, the validations worked, and the data was submitted. Common tools for this are Selenium, Cucumber, CA LISA, Rhino Mocks, etc.
- Browser and Device Testing – The Browser Wars may be over, but we still have a fragmented landscape of browsers used, and you have to test them all – Chrome, Edge, IE, Safari, Firefox, Android, iOS, phones, tablets, desktops… To cover them all, you can automate your tests and integrate with tools like BrowserStack, SauceLabs, Selenium, etc.
- Beta / Acceptance Testing – You need to ensure that you have built a quality project that meets the requirements of the business and the customer. This is usually a set of manual tests completed by your customer before and / or after launch.
- Install Testing – This subset of existing automation tests ensures you exercise as much of your codebase as possible in a new environment, particularly production, to ensure your server installation has completed properly.
- Smoke Testing – A small subset of existing automation tests, most probably used in other cases, that ensures any environment changes have not had an impact on your codebase and its functionality. These tests usually follow the happy paths (or red routes).
- Regression Testing – a complete and thorough suite of tests that exercise all of your code, new features and old, happy path and all known edge cases, including past and present bugs, ensuring you are prepared for a launch to production.
- Performance / Load / Stress Tests – Identify break points and thresholds in your software and hardware, and improve. Common tools include jMeter, LoadRunner, Silk Performer, and many others.
- Security / Penetration tests – Make your code more secure, prevent hacking and attacks. Tools include SonarQube, OWASP ZAP, Nexpose, WireShark, Retina, Aquila, and many more.
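As a toy version of the performance-testing bullet above: fire concurrent calls at a handler and collect latency percentiles. `handle_request` is a stand-in for a real endpoint; jMeter or LoadRunner would do this against real infrastructure, but the shape of the harness is the same.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Stand-in for the system under load (a real test would hit a URL)."""
    time.sleep(0.001)  # simulate ~1 ms of server work
    return {"echo": payload}

def load_test(requests=200, workers=20):
    """Run `requests` calls across `workers` threads, report percentiles."""
    latencies = []
    def one_call(i):
        start = time.perf_counter()
        handle_request({"id": i})
        latencies.append(time.perf_counter() - start)  # list.append is thread-safe
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(one_call, range(requests)))
    latencies.sort()
    return {
        "requests": requests,
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * len(latencies))] * 1000,
    }

print(load_test())
```

Watching the p95 (not the average) as you raise the worker count is what reveals the break points and thresholds the bullet above talks about.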
Tips To Keep In Mind
- Keep your code simple – Keep your code focused on solving one problem at a time. This makes it easier to read and understand, code correctly, debug, and maintain.
- Pair Programming – Two heads are better than one. Write your code cleaner, faster, and with fewer bugs. This method comes with built-in code reviews.
- Use Design Patterns and Architecture Patterns – Follow these, and you are using proven methods that other developers will understand. The Gang of Four is a good place to start learning design patterns, and there are many others. MVC, MVP, MVVM, and VIPER are some common architecture patterns that you can use to organize your code at an application level.
- Comment and Document your Code – Do this as close to the code as possible. This makes your code easier to read and understand. One way I like to do this is write pseudocode first, then fill in with actual code. Then the pseudocode becomes natural code commenting.
- Automated Builds and Continuous Integration – Run your code and test cases as often as possible, make sure they all work together. Jenkins is the most common way to achieve this. Another common tool is Bamboo.
- Demos – this forces you to think holistically about your code, how you will communicate your solution to others, and gather feedback from your audience. If you follow an agile methodology, chances are this is already part of your process.
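The pseudocode-first habit mentioned above looks like this in practice: sketch the steps as comments, fill in code under each one, and the comments remain as natural documentation. The function and data are made up for illustration.

```python
def top_customers(orders, limit=3):
    """Return the `limit` customer names with the highest total spend."""
    # 1. Sum order amounts per customer.
    totals = {}
    for order in orders:
        totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]
    # 2. Rank customers by total spend, highest first.
    ranked = sorted(totals, key=totals.get, reverse=True)
    # 3. Keep only the top `limit` names.
    return ranked[:limit]

orders = [
    {"customer": "ana", "amount": 120},
    {"customer": "bo", "amount": 75},
    {"customer": "ana", "amount": 30},
    {"customer": "cy", "amount": 200},
]
print(top_customers(orders, 2))  # -> ['cy', 'ana']
```

The three numbered comments were written before any of the code beneath them, so the "design" and the "documentation" are the same artifact.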
When you redesign or enhance your site, you make a lot of changes: the content, the design, the front end technology, the back end stack, the user flows, the information architecture – everything. It is tough to know what you have done right and what needs help, particularly compared to other sites. The graders below can show you exactly that. I use them… and so should you.
- https://website.grader.com/ – the gold standard of online web site graders. Shows performance, SEO, mobile capability, and security.
- https://www.semrush.com/ – this site gathers a LOT of marketing information about your site… Monitor this information before and after your cutover.
- https://validator.w3.org/ – Are you W3C Compliant? Are you writing valid HTML? Using this throughout your development will ensure your site is as readable and indexable as possible.
- http://www.webpagetest.org – How long does the first view of my page take? How about the second view? This grader shows you both… just like the Developer Tools in Google Chrome.
- https://developers.google.com/speed/pagespeed/insights/ – another technical site grader that can give you guidance where to increase performance. Be careful trying to get 100/100, though… not everything NEEDS to be done.
- http://nibbler.silktide.com/en_US – Evaluates your site down in four areas – Accessibility, Experience, Marketing, and Technology. Still useful to get another view of your site.
- https://www.woorank.com/ – “Run a review to see how your site can improve across 70+ metrics” – Marketing, SEO, Mobile, Usability, Technology, Crawl Errors, Backlinks, Social, Local, SERP Checker, Visitors.
- http://www.similarweb.com/ – Another great site for a large, corporate web site. But not a lot of information about performance. Good to monitor usage and marketing metrics.
- https://moz.com/researchtools/ose – Moz is known for its SEO tools, and this is an easy dashboard of information to monitor before and after your redesign. The free version is useful, but the Pro version is even better. Not a lot of tech help here, though.
- http://www.alexa.com/ – Free for 7 days; after that, only the paid version is really useful. Lots of marketing information is available, though.
- http://builtwith.com/ – Very technical. Shows you the infrastructure and software choices made by the development team. You will be surprised. Helpful for technology and information security teams.
- http://www.google.com/analytics – Free analytics tool. Tells you who uses your site, how much, where they are from, what browsers, what time of day… a plethora of information. Including Page Speed.
- https://www.google.com/webmasters/tools – Free tool that shows you what index errors Google has encountered, things to make your site more indexable, and what your pages look like to the Google Search Crawlers. Use this.
- http://www.bing.com/toolbox/webmaster – Everything that Search Console is for Google, this site is for Bing.
So did I miss any tools that you use? Are any of these ones you have struck off your list? How do you measure results of your site before and after? Leave a comment and let me know!
EDIT: Two more sites were recommended to me that help redesign projects, so I am adding them here:
If you are building a web site on an Agile team, you need to find ways to save time. These two checklists will help you with that. The first checklist, for on-page optimization, is helpful when building a new page or significantly modifying an existing one. This is a good set-up for success criteria for a user story or sprint. The second checklist, for on-site optimization, is good for regression testing or stabilization, and is a good baseline for success criteria for the release.
Do you have any feedback? Things you disagree with? Anything I missed? Please leave feedback.
- Page URL
- Readable by a human
- 115 characters or shorter
- shorter URLs are better for usability
- Head Section Order
- Meta tags are in the right order: Title > Description > Keywords.
- these tags are used to render the title and description in the search engine results pages
- Title Tag
- 6 to 12 words, 70 characters or less
- Unique across the site
- Description Tag
- include the most important info and keywords before the SERP cutoff
- approximately 160 characters including spaces.
- make it compelling – don’t want to waste your prime real estate
- Unique across the site
- Keywords Tag
- Even with the controversy of their value, include it as a best practice
- List keywords in order of importance, separated by commas.
- Meta Robots tag
- <meta name="robots" content="noindex">
- NoFollow prop on anchor tags
- View State tag
- Heading Tags
- make sure your first heading tag is an <h1>, and that there is only one on the page.
- Canonical tag
- Helps prevent duplicate content within your site
- rel="alternate" hreflang="x"
- Tells Google what language to target for search purposes
- Use page level keywords in your image alt attributes
- Ensure your images have proper descriptions for Accessibility Standards
- Alt attributes are also required to validate your HTML code.
- Ensure file names reflect the content of the image
- Geo Meta Tags
- Overall Word Count
- More than 250 words is recommended.
- Quality content is key.
- avoid duplicate content and thin content
- Dashes vs. Underscores in URLs
- Search engines treat underscores as part of the word, so they do not separate words.
- Dashes (i.e. hyphens) are word separators, but not too many or things could look like spam
- use fully qualified links, i.e. http://www.URL.com
- 100-200 links on a page is a good high end target
- Make sure your link text uses keywords and is relevant
- Ensure the most important part of your page is the first thing the bots crawl.
- externalize code to ensure there aren’t unnecessary lines above the body text.
- Make sure there are no misspellings or grammar mistakes
- Make sure your page is W3C valid HTML
- Last but not least, make sure it is relevant content
- Site Map
- Have an HTML sitemap with every page on it.
- Every page should link to that sitemap page
- Have an XML Sitemap to submit to search engines
- The site map should always have fully qualified URLs.
- Text Navigation
- Fully qualified domain
- 301 redirect from domain.com to www.domain.com
- Make your site available over http and https
- Robots.txt File
- tells the search engine spiders what to index and what not to index.
- Ensure XML sitemaps are listed in the robots.txt file
- Social Sharing
- Make sure they are all set up and working properly
- Web Analytics
- make sure you have it – GA, Omniture, etc.
- Make sure you have only one of each analytics tag on your page
- Ensure your analytics are set up properly – test with Fiddler, firebug, etc.
- Monitor them regularly
- Server Configuration
- Regularly check your server logs, looking for 404 errors, 301 redirects and other errors.
- Privacy Statement
- An important signal for Bing. It’s best practice to include one anyway.
- Static Pages
- Do not use more than two query string parameters
- use mod_rewrite or ISAPI_rewrite to simplify URLs
- use the Canonical tag.
- Check for Duplicate Content
- check out CopyScape.com. Use it regularly.
- Find and Fix Broken Links
- Google Search
- Home page should appear first
- Track how many pages are indexed
- 301 redirects
- Do not use multiple 301 redirects
- Site wide Uptime
- Cache your site
- Improve Site Speed
- Improve Site Performance
- Compress images
- Minify CSS and JS files
- Set Up a Google Webmaster Tools Account and check it regularly
- Register all versions of your domains and subdomains
- Check Health and Crawl Errors Reported
- Review Mobile Usability Issues
- Check for Manual Penalties Reported
- Check blocked content
- Ensure CSS and JS are not blocked
- Set up Bing Webmaster Tools as well
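Several of the checklist rules above (title of 70 characters or less, description around 160, URL of 115 or shorter) are mechanical enough to lint automatically. A minimal sketch; the thresholds follow this checklist, not any official search-engine specification, and the page dict is invented for illustration:

```python
# Length limits taken from the checklist above; search engines do not publish
# hard cutoffs, so treat these as rules of thumb rather than a spec.
LIMITS = {"title": 70, "description": 160, "url": 115}

def lint_page(page):
    """Return a list of human-readable warnings for one page dict."""
    warnings = []
    for field, limit in LIMITS.items():
        value = page.get(field, "")
        if len(value) > limit:
            warnings.append(f"{field} is {len(value)} chars (limit {limit})")
    if not page.get("description"):
        warnings.append("description is missing")
    return warnings

page = {
    "url": "https://example.com/blog/testing-tips",
    "title": "Out-of-the-Box Software Testing Ideas " * 3,  # deliberately too long
    "description": "Fuzzing, bug bashes, gamification and more.",
}
for warning in lint_page(page):
    print(warning)
```

Wire a script like this into the build pipeline and the on-page portion of the checklist becomes a regression test instead of a manual pass.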
SEO Checklist Source URLs
I just got an email about a really cool new tool built into Windows 7 that Microsoft used to debug their new platform. It is called Problem Step Recorder. The best thing to do is to post a snippet of the email right here. I think it says everything perfectly:
“In case you’re not aware of this, here is a little known Microsoft tool bundled with Windows 7 that can be extremely useful to illustrate a problem when testing an application. The diagnostic tool called “Problem Step Recorder” was originally produced by Microsoft during the development of Windows 7 Beta to assist their Quality Assurance team in debugging the OS. It uses a combination of screen captures with mouse tracking to record your actions and can be a great way of describing a problem to others. The program is launched from the Start menu by typing ‘psr’ or ‘psr.exe’ in the search field. You’ll get a floating applet that looks like this: When you hit the Record button, the applet tracks your mouse and keyboard input while taking screenshots that correspond to each new action. When you stop recording your session is saved to an HTML slide show that recreates your steps. It also allows you to add comments to further document the problem. I think it can be very useful as an attachment in [your bug tracking tool] for those hard to describe issues or as a “How To” document for end users.”
Which leads to other ways of doing this… you could use WebEx or Windows Media Encoder to document any bug as a step-by-step. If you use WatiN, Selenium, or VS2010, you can also use their recorders to document any bugs you may find in a web application, hand that to the dev team, and then there is no guessing how to reproduce the bug.
Kudos to Microsoft, and to the folks who uncovered this!