TY - GEN
T1 - MiTV
T2 - 25th IEEE/ACM International Conference on Automated Software Engineering, ASE'10
AU - Taneja, Kunal
AU - Li, Nuo
AU - Marri, Madhuri R.
AU - Xie, Tao
AU - Tillmann, Nikolai
PY - 2010
Y1 - 2010
N2 - User-input validators play an essential role in guarding a web application against application-level attacks. Hence, the security of the web application can be compromised by defective validators. To detect defects in validators, testing is one of the most commonly used methodologies. Testing can be performed by manually writing test inputs and oracles, but this manual process is often labor-intensive and ineffective. On the other hand, automated test generators cannot generate test oracles in the absence of specifications, which are often not available in practice. To address this issue in testing validators, we propose a novel approach, called MiTV, that applies Multiple-implementation Testing for Validators, i.e., comparing the behavior of a validator under test with other validators of the same type. These other validators of the same type can be collected from either open or proprietary source code repositories. To show the effectiveness of MiTV, we applied MiTV to 53 different validators (of 6 common types) for web applications. Our results show that MiTV detected real defects in 70% of the validators.
KW - Reliability
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=78649763596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78649763596&partnerID=8YFLogxK
U2 - 10.1145/1858996.1859019
DO - 10.1145/1858996.1859019
M3 - Conference contribution
AN - SCOPUS:78649763596
SN - 9781450301169
T3 - ASE'10 - Proceedings of the IEEE/ACM International Conference on Automated Software Engineering
SP - 131
EP - 134
BT - ASE'10 - Proceedings of the IEEE/ACM International Conference on Automated Software Engineering
Y2 - 20 September 2010 through 24 September 2010
ER -