
Making 3D Visual Research Outcomes Transparent: Abstracts

Richard Beacham, Professor of Digital Culture, King's Visualisation Lab: Making space: caught between the monster and the wall

The focus of this Symposium is upon standards and transparency in the deployment of 3D modelling as an historiographical method, a field in which its international participants are leaders. Specifically, we wish to identify how best to document both the process and the outcomes of this type of research in such a manner that other scholars can fully understand and rigorously evaluate them, enabling such methods to acquire greater recognition and standing in the scholarly community, and driving up standards of such work throughout the academic and cultural heritage sectors. It aspires to be somewhat different from other types of symposia. In addition to exchanging in the usual manner fascinating information about colleagues' work, it aims above all to inform the drafting of a guidelines document. We hope this will significantly assist in providing the basis for future standards and methodologies in our fields, both in enhancing the quality of the actual modelling process and in establishing the minimum levels of documentation necessary for users critically to assess visualisation-based research processes. An objective is to identify and disseminate the choices and decisions that occur during the complex process of modelling, which may include the reasons for choices made, as well as indications of possible alternative hypotheses.

This paper will consider some of the issues relevant to the problems upon which the Symposium will be focussing by using, as a challenging and unusually complex case-study, some of the work which the KVL has conducted on Roman (Pompeian) wall paintings. Our 3D reconstructions in this project are computer-based visualisations derived from the ancient artists’ own attempts to visualise things as images fashioned upon domestic walls, and these approaches to visualisation – both ancient and modern – illuminate key issues and problems which it is the aim of the Symposium to address.

Drew Baker, Senior Research Fellow, King's Visualisation Lab: Visual Based Research - The need for transparency

The composer Felix Mendelssohn said that "music is not too indefinite for words, but too definite." Similarly, Data Objects (objects about which data is held) have often been considered too vague, and have been pinned down to specific and objective categories through the use of metadata. This paper proposes that there exists a parallel stream of ancillary information to metadata, generated as part of a visualisation-based research process, which it is necessary to document and disseminate alongside the visual research outcomes.

This "paradata" the paper argues is essential to understanding and building successful and transparent research hypotheses and conclusions, particularly in areas where data is questionable, incomplete or conflicting and explores how this can be applied to the process of creating three dimensional compute

Sorin Hermon, Senior Researcher, VAST-Lab, PIN scrl: 3D Visualization as a Research Tool in Archaeology

Three-dimensional (3D) modelling and virtual reconstruction (VR) of archaeological features are common tools for communicating Cultural Heritage, especially to the wider public; archaeological parks, museums and websites dedicated to Cultural Heritage often display virtual 3D artefacts, structures or landscapes, enhancing visitors' comprehension of the past. However, the potential contribution of 3D and VR to archaeological research is commonly neglected by the archaeological community, which often views the process of building a 3D model as a stage apart from the usual research pipeline, a stage intended merely to present archaeological results to the public in a fashionable, attractive way. One of the more common criticisms raised by archaeologists is that 3D models are a closed box, with no possibility of evaluation and often without a particular aim, the emphasis being on computer graphics and artistic aspects rather than on the wish to solve a particular archaeological scientific problem. The paper discusses this trend and suggests possible approaches to integrating 3D modelling into archaeological research methodology, describing validation methods for 3D models that allow their deconstruction and critical evaluation. Moreover, the concept of a "contingency threshold" will be introduced, which allows the visualization of a mathematical quantification of a 3D model's credibility.
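The abstract does not define the contingency threshold formally, so the following is only a hedged sketch of one plausible reading: each reconstructed element carries a certainty score, the model's overall credibility is their evidence-weighted average, and that score is checked against a threshold. Every number and name here is hypothetical.

```python
def model_credibility(elements):
    """elements: list of (certainty, weight) pairs, certainty in [0, 1]."""
    total_weight = sum(w for _, w in elements)
    return sum(c * w for c, w in elements) / total_weight

elements = [
    (1.0, 5.0),  # surviving foundations, measured on site
    (0.7, 3.0),  # walls inferred from comparable buildings
    (0.3, 2.0),  # roof, purely conjectural
]

score = model_credibility(elements)   # 0.77 for the figures above
THRESHOLD = 0.6                       # hypothetical contingency threshold
print(f"credibility {score:.2f}:", "accepted" if score >= THRESHOLD else "flagged")
```

Visualizing such per-element scores, for instance by colour-coding the model, would make the quantification legible to the viewer.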

Franco Niccolucci, VAST-Lab, PIN scrl: Documenting the process of archaeological interpretation and reconstruction: a quasi-post-processual approach

Previous work by the author and his colleagues has concentrated on the subjectivity and uncertainty of archaeological reconstructions, and on how these flaws are magically deleted upon entering a computer. We showed that archaeological databases and artefact classification might benefit from an awareness of subjective judgement and imperfect knowledge, and endeavoured to adapt computerized tools to take such features into account. We maintained, however, that the usual computer tools are suitable for everyday practice. This is possibly not the case for virtual reconstructions, where the process of interpretation/reconstruction ("recensio, examinatio, and divinatio") is, as yet, almost always undocumented.

Strangely enough, when it comes to reconstructions scholars accept on paper what they do not accept on computers, perhaps because in the latter case it is easier to criticise pretty images, spectacularization and the "absence of the aura of the real" (as confusedly stated by an Italian Cultural Heritage VIP speaking of a recent exhibition of virtual reconstructions of Rome).

Computers, on the contrary, push for more precise information about how things are done. In the end, then, they help us reflect on archaeological methodology, and this will be the focus of the lecture. Starting from a paraphrase of a famous statement, "nihil est in computer quod non fuerit prius in intellectu", we believe that one has to backtrack all the steps leading to a (mental) model of the past in order to find methods that lend credibility to computer reconstructions.

For this purpose we are going to use a sort of laboratory case, albeit a real one: the funerary mausoleum of Porsenna, an Etruscan monument of which the only surviving trace is a detailed description by Pliny the Elder in his Naturalis Historia. This monument is a favourite topic in our activity because it fits our testing needs very well, serving much as guinea pigs or white mice do.

In the lecture, the reconstruction process will be disassembled and re-assembled step by step. The process will (quasi-post-processually) be accompanied by a statement of all doubts and uncertainties, duly recorded and inserted into a computer reconstruction carried out in parallel.

It is hoped that this simplified case study may serve as a model for real reconstruction cases and provide a draft guide for such exercises. Additionally, international standards will be used to create the model and to insert the additional information we propose, a practice we hope scholars will consider of the highest importance.
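To make the proposed documentation concrete, here is a hedged Python sketch (not the author's actual tooling) of a step-by-step reconstruction log in which each claim carries its source and its doubts; the stages follow the recensio/examinatio/divinatio sequence named above, and the sample entries merely gesture at Pliny's account.

```python
steps = []

def record_step(stage, claim, source, doubts):
    """Log one interpretative step so the chain from source to model
    can later be backtracked and criticised."""
    steps.append({"stage": stage, "claim": claim,
                  "source": source, "doubts": doubts})

record_step("recensio", "Square base, c. 300 feet per side",
            "Pliny, Naturalis Historia XXXVI",
            ["manuscript variants give differing figures"])
record_step("examinatio", "Five pyramids stand on the base",
            "Pliny, quoting Varro",
            ["Pliny himself doubts the tale"])
record_step("divinatio", "Upper storeys restored by analogy",
            "comparison with other Etruscan monuments",
            ["alternative restorations are equally consistent"])

for s in steps:
    print(f"[{s['stage']:>10}] {s['claim']} -- doubts: {'; '.join(s['doubts'])}")
```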

Kate Devlin, Department of Archaeology and Anthropology, University of Bristol: Just how predictable is predictive lighting?

Predictive lighting refers to the use of computer modelling software to accurately simulate the behaviour of light, resulting in a virtual scene that physically represents the real world in terms of illumination. Predictive lighting has been used in areas such as architectural simulations and forensic reconstructions, and also in the representation of archaeological sites and artefacts with the aim of depicting the environment as it would have looked to an observer in the past. Leaving aside the issues of the representation itself, the use of predictive lighting has its own limitations. While we can simulate lighting values and their distribution in a scene, we cannot yet say with complete confidence that we have achieved a perceptual match between what people see when they look at a computer model and what they would see in a real-world equivalent. This is due to factors such as display restrictions and aspects of the human visual system such as colour and brightness perception which affect our interpretation of the output images. This paper will discuss how we use predictive lighting to present a visual interpretation of the past, and how we might address problematic areas in order to achieve a more objective visualisation of the illumination of past environments.
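As a concrete illustration of the display restrictions mentioned above (a sketch, not the author's method): a physically based simulation may yield scene luminances spanning many orders of magnitude, which must be compressed into a monitor's narrow range by a tone-mapping operator, and any such compression alters the brightness relationships a real observer would perceive. The luminance figures below are invented for the example.

```python
import math

# Hypothetical scene luminances in cd/m^2, as a physically based
# renderer might compute them; their 20000:1 range far exceeds a display's.
scene = {"flame": 10000.0, "wall": 40.0, "corner": 0.5}

DISPLAY_MAX = 300.0  # typical monitor peak luminance, cd/m^2
KEY = 0.18           # target mid-grey of the mapped image

# Reinhard-style global tone mapping: scale by the scene's log-average
# luminance, then compress with l / (1 + l).
log_avg = math.exp(sum(math.log(l) for l in scene.values()) / len(scene))

for name, lum in scene.items():
    scaled = KEY * lum / log_avg
    display = (scaled / (1 + scaled)) * DISPLAY_MAX
    print(f"{name:>6}: scene {lum:8.1f} cd/m^2 -> display {display:5.1f} cd/m^2")
```

Here the flame-to-corner ratio collapses from 20000:1 to roughly 600:1 on screen, which is precisely why simulated lighting values alone do not guarantee a perceptual match.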

Donald Sanders, President, Institute for the Visualization of History, Williamstown MA: More than pretty pictures of the past: an American perspective on Virtual Heritage

Since the early days of virtual heritage, simply shaded massing models have given way to complexly lit and detailed virtual worlds. Yet we are still not where we should be in many aspects of our results, and how we do what we do is still a mystery to many. My presentation will touch on: (1) how archaeology traditionally deals with the evidence trail, with special focus on the use of images as documentation; (2) how digital archaeology has changed the rules, and how the discipline is trying to cope; and (3) how virtual heritage projects can solve many problems relating to data trails by linking the evidence to the outcome, so that virtual worlds become visual indexes to all the underlying information, and thus more than pretty pictures of the past. I will illustrate that last point with some of the projects my companies have been involved with over the last decade.

Martin Turner, Manchester Computing, The University of Manchester: Lies, Damn Lies and Visualizations - will Metadata be a Solution or a Curse?

Visualizations have immense power to convince and illustrate, and at times enable users to gain a higher level of insight and inspiration. Drawing on the massive amount of brain power devoted to the human visual system, which accounts for roughly a third of the brain, visualizations have been shown to be one of the best, and sometimes the only, way of conveying a huge amount of data as quickly as possible. Their usefulness has been proven in countless examples, but they can also confuse, deceive and even lie. These deceptions can be accidental and, at times throughout history, possibly deliberate.

It is said that a picture is worth a thousand words, but in fact, to quote W. Terry Hewitt, a 'good visualization often requires a thousand words to describe it'. When teaching good scientific visualization techniques, a common device is to present a seminal 1994 publication by Al Globus and Eric Raible that teaches the opposite: the top fourteen ways to say nothing with a scientific visualization. Throughout the last decade three new philosophies have emerged: the role of e-Science, allowing the creation of tools by which metadata can be connected with both outputs and source data; the development of the ideas of the Semantic Web, as described in the vision of Tim Berners-Lee, James Hendler and Ora Lassila; and the construction of ontology descriptions, including ideas directly related to visualizations.
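As a small illustration of that first philosophy (a sketch using the rdflib library and Dublin Core terms; the URIs and names are placeholders, not a prescribed schema), metadata can connect a visualization output back to its source data:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/vis/")  # placeholder namespace

g = Graph()
g.bind("dcterms", DCTERMS)

output = EX["rendering-001"]        # the visualization output
source = EX["survey-dataset-042"]   # the data it was derived from

g.add((output, DCTERMS.title, Literal("Site rendering, view from north")))
g.add((output, DCTERMS.source, source))  # the link that keeps the output honest
g.add((source, DCTERMS.title, Literal("Laser-scan survey, trench 4")))

print(g.serialize(format="turtle"))
```

A visualization published with such links can at least be traced back to its data, even if, as noted below, describing the full meta- or paradata remains far harder.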

It will be shown that these 'ways of saying nothing or lying' are universal to many visualizations, and that with new tools they may have to be re-written. There is also a word of caution: the true complexity of accurately describing meta- or paradata makes it a potentially unsolvable problem.