A growing number of people are turning to social media in the aftermath of emergencies to search for and publish critical, up-to-date information. Retrieving and exploiting such information may prove crucial to decision makers seeking to minimize the impact of disasters on the population and on infrastructure. Yet, to date, the task of automatically assessing the consequences of disasters has received little to no attention. Our work aims to bridge this gap by combining the theory of statistical learning and predictive models with social media data. Here we investigate the exploitation of Twitter data to improve earthquake emergency management. We adopt a set of predictive linear models and evaluate their ability to estimate the intensity of worldwide earthquakes. The models build on a dataset of almost 5 million tweets and more than 7,000 globally distributed earthquakes. We run and discuss diagnostic tests and simulations on the generated models to assess their significance and to avoid overfitting. Finally, we interpret the relations uncovered by the linear models and conclude by illustrating how the findings reported in this work can be leveraged by existing emergency management systems. Overall, the results show the effectiveness of the proposed techniques and yield an estimate of earthquake intensity far earlier than conventional methods can. The proposed solutions can help identify scenarios where damage actually occurred, in order to decide where to concentrate rescue teams and organize a prompt emergency response.