Browser compatibility testing: what is the best practice for this kind of testing?

We are currently going through the last phase of a WebSphere Portal 7 to 8 upgrade project, and user experience is very important for the project to succeed. Even though functionality and flow are exactly the same, the user interface has changed dramatically.

Manual testing is nearly impossible given the project's limited resources and time, and online tools such as BrowserStack (http://www.browserstack.com/) are not an option, as they have not been approved by the security department.

The only option we currently have is test automation, which is time consuming and requires higher capabilities than manual testing.

I appreciate your valuable, innovative, and inspiring answers.

Question added by Hashem Al Hariri, Test Manager, Saudi Telecom Company
Date Posted: 2015/03/14
by Praveen Uppala, Test Manager, Saguna Consultancy Services Ltd

Standard multi-browser testing

Standard multi-browser testing is a basic check that the application under test (AUT) supports more than one browser, for example Internet Explorer, Firefox, Chrome, and Safari. This type of test can run on a single computer, first on one browser and then on another. Alternatively, it can run on different computers. The same test script is typically run on different browsers without modification (see the sketches at the end of this answer). It usually assumes that only one user is active during the testing.

Multi-version testing

This is a test that your AUT works on more than one version of a browser, such as Internet Explorer 9 and 10, or all versions of Firefox from 10 onwards. Typically, only one user is active during the testing. The same test script is used for different browsers. In many cases, the tests must be run on different computers, since some browsers do not allow more than one version to be installed on the same computer at one time.

Concurrent testing

Concurrent testing checks that your AUT works with two or more browsers at the same time. The same user might be logged into different browsers, or different users could be logged in, depending on the aim of the test and the requirements of the AUT. Aims of the test include:

• Ensuring that there is no unexpected interaction between the browsers
• Checking that the AUT allows a user to be logged in more than once. If a user can be logged in more than once, check that any updates made by one user are reflected in the other browser(s).
• Checking that the AUT prevents a user from logging in more than once, and that the browser behaves correctly when a user attempts to log in twice. Note that some applications will not allow the second login, while others will allow the second login and log the first session out.

Concurrent testing is a generic term that covers the following test types:

• Single browser concurrent testing: the same browser is used on the same computer at the same time. Most browsers support tabbed browsing, which allows multiple web pages to be opened within a single browser window. Typically, each tab is implemented as a separate process, so single browser concurrent testing can be performed by opening two tabs within the same browser.
• Single browser distributed concurrent testing: the same browser is used on different computers at the same time.
• Multi-browser concurrent testing: different browsers are used on the same computer at the same time.
• Multi-browser distributed testing: different browsers are used on different computers at the same time.

It is also possible to test that the AUT works with two or more different versions of the same browser on the same computer at the same time. This is called multi-version concurrent testing and will not be addressed further here.

Application (or browser) compatibility testing

This is a test that the AUT looks and behaves the same regardless of the browser or browser version used to access it. Different browsers have different ways of rendering (displaying) the same HTML, so the same page might look different in two different browsers. This test checks that the page looks the same, or at the very least that there are no glaring discrepancies between browsers. This test is often carried out manually. Typically, the tester will choose one browser to be the "baseline" browser, will ensure that all functional tests work correctly, and will check that each page in the baseline browser is rendered correctly.
Once the browser has been established as the baseline, the tester will open another browser and go through each page, comparing its look to the baseline browser.

Some of the most commonly encountered differences between browsers include:

• Fonts
• Page margins
• Sizes of elements
• Position of elements
• Colors of elements

There are also differences that result from the way JavaScript is handled by different browsers. These can range from innocuous minor layout issues to major differences in behavior. Some examples that we have seen include:

• Events failing to trigger on one of the browsers (such as a drop-down containing states that is supposed to change according to the selection of a drop-down containing countries)
• Drag and drop not behaving correctly on one of the browsers
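To make the "same script, different browsers" point above concrete, here is a minimal sketch using Selenium WebDriver's Python bindings (my assumption; any comparable WebDriver binding would do). The portal URL, the title check, and the list of browsers are placeholders rather than details from the project in the question; the idea is simply that one script runs unchanged on whichever browsers are installed.

```python
# A minimal sketch, assuming Selenium WebDriver (Python bindings) is installed.
# The URL and the title check are placeholders for the real portal pages.
from selenium import webdriver

BASE_URL = "https://portal.example.com/login"   # hypothetical portal URL


def run_smoke_test(driver):
    """One shared script: the same checks run unchanged on every browser."""
    driver.get(BASE_URL)
    assert "Portal" in driver.title, f"Unexpected title in {driver.name}"
    # ...further shared functional checks would go here...


def available_drivers():
    """Yield a driver for each locally installed browser, skipping missing ones."""
    for factory in (webdriver.Chrome, webdriver.Firefox, webdriver.Edge):
        try:
            yield factory()
        except Exception:
            continue  # that browser/driver is not installed on this machine


if __name__ == "__main__":
    for driver in available_drivers():
        try:
            run_smoke_test(driver)
            print(driver.name, "OK")
        finally:
            driver.quit()
```

If external services such as BrowserStack are ruled out by the security department, the same script could be run against browsers hosted on an internal Selenium Grid instead of the locally installed ones.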
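For the baseline comparison described above, one way to cut down the manual page-by-page work is to automate just the capture step: save one screenshot per page per browser, then review or image-diff each set against the baseline browser's set. The sketch below again assumes Selenium WebDriver in Python, treats Firefox as the baseline, and uses placeholder page paths.

```python
# A minimal sketch of the screenshot-capture step, assuming Selenium WebDriver.
# The portal URL, page paths, and output folder are placeholders.
import os

from selenium import webdriver

BASE_URL = "https://portal.example.com"      # hypothetical portal URL
PAGES = ["/", "/dashboard", "/profile"]      # hypothetical pages to compare
OUT_DIR = "screenshots"


def capture_pages(driver, label):
    """Save one screenshot per page so each browser can be diffed against the baseline."""
    folder = os.path.join(OUT_DIR, label)
    os.makedirs(folder, exist_ok=True)
    for page in PAGES:
        driver.get(BASE_URL + page)
        name = page.strip("/").replace("/", "_") or "home"
        driver.save_screenshot(os.path.join(folder, name + ".png"))


if __name__ == "__main__":
    # Firefox plays the role of the baseline browser here; Chrome is compared to it.
    for label, factory in (("baseline_firefox", webdriver.Firefox),
                           ("chrome", webdriver.Chrome)):
        driver = factory()
        try:
            capture_pages(driver, label)
        finally:
            driver.quit()
```

The comparison itself remains a job for a tester or an image-diff tool; the script only ensures that every page is captured consistently in every browser.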

by Joby Raj, Manager - Projects and Testing, QBurst

Spoon.net can be a good alternative, or there is BrowserLab from Adobe.

But I would strongly suggest doing a level of manual testing first, to ensure the application works properly in at least one environment, and only then moving on to the compatibility testing.

Test automation saves you time during execution, but the preparation period will be longer and more expensive than manual effort.
