Automatically creating test cases from statistical models of web application usage is an effective approach to generating test cases that represent actual usage. These models are typically built from all collected user sessions. In this paper, we consider how grouping user sessions, specifically by the user's privilege, creates different statistical models, and we examine the testing implications of those differences. We performed a study of user-privilege-specific navigation models and the resulting abstract test cases generated from over 19,000 user sessions to four deployed web applications. Our results suggest that grouping user sessions by the users' privileges yields smaller navigation models, which produce realistic test cases that represent users with that privilege well while also exploring navigations not seen in the input user sessions. In some cases, the user-privilege-specific models are significantly smaller, which allows the tester to either (a) generate relatively few test cases and still represent the user type well or (b) create test cases from a less abstract model, without exorbitant model space costs or the need for additional models to generate executable test cases. However, the benefits are not universal across applications; we therefore present guidance, in the form of metrics, to help testers determine whether creating user-privilege-specific test cases will be advantageous.