We discuss collective decision-making and learning capabilities of social networks in the presence of uncertainty. We present a discrete-time decision-making model for a network of agents in an uncertain environment wherein no agent has a model of the environment's evolution. The environment's impact on the agent network is captured through a sequence of cost functions, where the costs are revealed to the agents only after they commit to their decisions. The costs comprise individual agent costs and local-interaction costs incurred by each agent and its neighbors in the social network. In this model, each agent has a default mixed strategy that stays fixed regardless of the state of the environment, and the agent must expend effort when deviating from this strategy in order to alleviate the impact of the uncertain costs coming from the environment. We construct decentralized agent strategies whereby each agent selects its strategy based only on the costs it incurs and the decisions of its neighbors in the network. In this setting, we quantify social learning in terms of regret, defined as the difference between the realized network performance over a given time horizon and the best performance that could have been achieved in hindsight by a fictitious centralized entity with full knowledge of the environment's evolution.
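The regret notion described above can be illustrated with a minimal simulation sketch. All names and parameters here are hypothetical and chosen for illustration only: agents play a fixed default mixed strategy, per-round costs are revealed after decisions, and the hindsight benchmark is a centralized entity that picks the per-round optimal action with full knowledge of the realized costs. This is not the paper's algorithm, only a toy instance of the regret comparison it defines.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, T = 4, 3, 200  # hypothetical sizes

# Each agent follows the same fixed default mixed strategy (uniform here),
# with no model of how the environment evolves.
default_strategy = np.full(n_actions, 1.0 / n_actions)

realized_cost = 0.0
hindsight_cost = 0.0
for t in range(T):
    # The environment generates a cost vector per agent, revealed only
    # after the agents have committed to their decisions.
    costs = rng.uniform(0.0, 1.0, size=(n_agents, n_actions))
    # Each agent samples an action from its default mixed strategy.
    actions = [rng.choice(n_actions, p=default_strategy) for _ in range(n_agents)]
    realized_cost += sum(costs[i, a] for i, a in enumerate(actions))
    # A fictitious centralized entity with full knowledge of the costs
    # picks the per-round minimum for every agent.
    hindsight_cost += costs.min(axis=1).sum()

# Regret: realized network performance minus the best hindsight performance.
regret = realized_cost - hindsight_cost
print(f"cumulative regret over {T} rounds: {regret:.2f}")
```

Because the hindsight benchmark minimizes the cost in every round, the regret in this sketch is non-negative by construction; a learning strategy would aim to make it grow sublinearly in the horizon `T`.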