In this Research Work-in-Progress paper, we examine the effectiveness of CATME, a tool that implements a criteria-based approach to team formation. The tool forms teams based on criteria such as demographics, skills, and work styles, which it collects from students via an online survey. The effectiveness of this class of tool depends on the practicality of the instructor's configuration of the criteria, the veracity of students' survey responses, and the soundness of the team formation algorithm. We investigate potential issues affecting these factors through a new analysis of data collected in a prior study, which compared the performance of teams formed using CATME with that of randomly formed teams in a user interface design course. Team performance did not differ statistically between the two conditions. In examining students' responses to the team formation survey, we found self-assessment issues, such as inconsistencies between students' ratings of their skills and their reports of their strongest skills. We also found cases where the tool produced unexpected results when calculating the homogeneity of a team's skills. We conclude by discussing implications for instructors seeking to mitigate these problems.