Think big about data: Archaeology and the Big Data challenge

  • Gabriele Gattiglia (Author)

Abstract

Usually defined as high-volume, high-velocity, and/or high-variety data, Big Data allow us to learn things we could not comprehend with smaller amounts of data, thanks to the power provided by software, hardware, and algorithms. This requires a novel archaeological approach: using a lot of data, accepting messiness, and moving from causation to correlation. Do the imperfections of archaeological data preclude this approach? Or are archaeological data a perfect fit precisely because they are messy and difficult to structure? Archaeology routinely deals with large, complex datasets, fragmentary data, and data from a variety of sources and disciplines, rarely in the same format or at the same scale. If so, is archaeology ready to work more with data-driven research and to accept predictive and probabilistic techniques? Big Data inform rather than explain: they expose patterns for archaeological interpretation, and they are both a resource and a tool. Data mining, text mining, data visualisation, quantitative methods, image processing and related techniques can help us to understand complex archaeological information. Nonetheless, however seductive Big Data appear, we cannot ignore the problems, such as the risk of equating data with truth, and issues of intellectual property and ethics. Rather, we must adopt this technology with an appreciation of its power but also of its limitations.
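As a purely hypothetical illustration of the kind of data-led, correlation-first analysis the abstract alludes to (not a method from the article itself), the short Python sketch below computes a rank correlation over invented, deliberately incomplete site records; all column names and values are assumptions made for the example.

```python
# Illustrative sketch only: a data-led look at hypothetical archaeological
# records, where a correlation exposes a pattern without claiming causation.
import pandas as pd

# Invented, deliberately messy records: missing values stand in for the
# fragmentary, multi-source data the abstract describes.
sites = pd.DataFrame({
    "site_id": ["A1", "A2", "B1", "B2", "C1", "C2"],
    "excavated_area_m2": [120.0, 85.0, None, 300.0, 45.0, 210.0],
    "sherd_count": [940, 610, 380, None, 300, 1750],
})

# Spearman rank correlation over pairwise-complete observations: robust to
# outliers and tolerant of gaps, in the spirit of "accepting messiness".
rho = sites["excavated_area_m2"].corr(sites["sherd_count"], method="spearman")
print(f"Spearman correlation (excavated area vs. sherd count): {rho:.2f}")

# A strong correlation here is a pattern to be interpreted archaeologically,
# not an explanation: the data inform, the archaeologist explains.
```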

Published: 2015-12-07
Language: English
Keywords: Big Data, datafication, data-led research, correlation, predictive modelling, Open Data, EAA 2014