Our goal is to enable robots to better assist people with motor impairments in day-to-day tasks. Currently, such robots are teleoperated, which is tedious: the operator must carefully maneuver the robot by providing input through some interface. This is further complicated because most tasks involve constraints, e.g., a limit on how far the end effector can tilt before a glass the robot is carrying spills. Satisfying these constraints can be difficult or even impossible given the latency, bandwidth, and resolution of the input interface. We seek to make operating these robots more efficient and to reduce the cognitive load on the operator. Given that manipulation research is not advanced enough to make these robots autonomous in the near term, achieving this goal requires identifying aspects of these tasks that are difficult for human operators but easy to automate with current capabilities. We propose that constraints are the key: maintaining task constraints is the most difficult part of the task for operators, yet it is easy to do autonomously. We introduce a method for inferring constraints from operator input, along with a confidence-based way of assisting the user in maintaining them, and evaluate it in a user study.