Socially intelligent robots are a priority for large manufacturers seeking to deploy collaborative robots in many countries around the world. This paper presents an approach to robot motion generation in which a human demonstration is imitated, collisions are avoided, and a "style" is applied to subtly modify the feasible motion. The framework integrates three subsystems into a holistic method that navigates the trade-off between form and function. The first subsystem uses depth-camera data to track a human skeleton and build a low-dimensional motion model. The second subsystem applies the resulting joint angles to a simulated UR3 robot, modifying them to produce a feasible trajectory: one that avoids physically infeasible configurations and collisions with the environment while remaining as close to the original demonstration as possible. The final subsystem applies four style parameters, based on prior work using Laban Effort Factors, to endow the trajectory with a specific "style". This approach yields adaptive robot behavior in which a single human demonstration can produce many subtly different robot motions. The effectiveness of the hybrid approach, which considers functional as well as expressive goals, is demonstrated in three environments of increasing clutter. As expected, in more cluttered environments the desired imitation is less pronounced than in unconstrained ones. Potential applications of this framework include programming robot motion on a factory floor more efficiently and generating feasible motion for multiple robots from a single demonstration. This quantitative work highlights the Function/Expression duality named in the Laban/Bartenieff Movement System, illuminating how the arts are critical for "practical" spaces such as the factory.