Observed social networks are often treated as proxies for underlying social networks. The analysis of observed networks often involves identifying influential nodes via various centrality metrics. Our work is motivated by recent research on the investigation and design of adversarial attacks on machine learning systems. We apply the concept of adversarial attacks to social networks by studying strategies through which an adversary can minimally perturb the observed network structure to achieve a target objective: changing the ranking of nodes under a given centrality measure. Such perturbations can represent an adversary's attempt to boost or diminish the degree to which others perceive them as influential or powerful. They also allow us to study the impact of adversarial attacks on targets and victims, and to design metrics and security measures that help identify and mitigate adversarial network attacks. We conduct a series of experiments on synthetic network data to identify attacks that allow the adversarial node to achieve its objective with a single move. We test this approach on several common network topologies and centrality metrics. We find that there is a small set of moves that allow the adversary to achieve their objective, and that this set is smaller when the objective is to decrease a centrality score than when it is to increase one. These results can help with assessing the robustness of centrality measures. The notion of perturbing social network data to achieve adversarial outcomes has practical implications, e.g., for information diffusion on social media, for influence and power dynamics in social systems, and for improving network security.
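The abstract does not specify the exact attack formulation, but the core idea of a single-move attack on a centrality ranking can be sketched as follows. This is an illustrative example, not the paper's method: it assumes degree centrality, competition-style ranking (1 plus the number of strictly higher-scoring nodes), and moves restricted to adding or removing a single edge incident to the adversary; the function names and the toy graph are our own.

```python
def degree_centrality(edges, nodes):
    # Degree centrality: degree divided by (n - 1).
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in deg.items()}

def rank_of(node, centrality):
    # Competition ranking: 1 + number of nodes with a strictly higher score.
    return 1 + sum(1 for v in centrality if centrality[v] > centrality[node])

def single_move_attacks(edges, nodes, adversary):
    """Enumerate single edge additions/removals incident to the adversary
    that improve (lower) the adversary's degree-centrality rank."""
    base_rank = rank_of(adversary, degree_centrality(edges, nodes))
    edge_set = {frozenset(e) for e in edges}
    moves = []
    for other in nodes:
        if other == adversary:
            continue
        e = frozenset((adversary, other))
        if e in edge_set:  # candidate move: remove an existing edge
            new_edges = [x for x in edges if frozenset(x) != e]
            kind = "remove"
        else:              # candidate move: add a new edge
            new_edges = edges + [(adversary, other)]
            kind = "add"
        new_rank = rank_of(adversary, degree_centrality(new_edges, nodes))
        if new_rank < base_rank:
            moves.append((kind, other, new_rank))
    return base_rank, moves

# Toy graph: nodes 0 and 1 dominate; node 2 acts as the adversary.
nodes = list(range(5))
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 4)]
base, moves = single_move_attacks(edges, nodes, adversary=2)
# Node 2 starts at rank 3; adding one edge to node 3 or 4 ties it for rank 1.
```

Brute-force enumeration like this scales linearly in the number of candidate moves per centrality recomputation, which is feasible for the small synthetic networks the experiments describe; other centrality metrics can be substituted for `degree_centrality`.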