Wednesday, October 29, 2008

PhD thesis of the Object-Field Model


The full PhD of the Object-Field Model can be accessed from http://vega.soi.city.ac.uk/~fd776/phd/PhD_VoudourisV.pdf OR http://ssrn.com/abstract=1292262

ABSTRACT

The need for a conceptually unifying data model for the representation of geospatial phenomena has already been acknowledged. Recognising that the data model employed by and large determines what can be done by way of analysis, and the methods by which that analysis can be undertaken, there has been some activity in developing unifying data models for geospatial representation in digital form. Some successes have been reported. Nevertheless, progress has been slow, especially at the conceptual and logical levels of abstraction of geospatial data models.

Concepts and ideas from cognitive and perceptual psychology as well as GIScience and GISystems literature are examined within the context of geospatial data modelling and reasoning. Drawing on and combining these concepts, ideas and successes with an empirical approach which proposes generalities by induction, this thesis suggests the fused Object-Field model with uncertainty and semantics at the conceptual, logical and physical levels of abstraction. The logical level has been formalised as a Unified Modelling Language (UML) class diagram and the physical level has been implemented in the Java programming language.

The purpose of the Object-Field model is to better support the representation and reasoning of geospatial phenomena, particularly indeterminate phenomena such as town centres and land cover changes. It is shown that many of the concepts required to better represent geospatial phenomena can be derived from a single foundation that is termed the elementary-geoParticle, which is regarded as indivisible, has no parts and serves as the standard for integrating the dual continuous-field and discrete-object conceptualisations by means of aggregation. A second concept is introduced, termed the Traditional Scientific and Concept spaces of the Object-Field model, and shown to provide a useful foundation for collaborative reasoning. The traditional scientific space is a mathematical representation of observational data and the concept space is a representation of conceptualisations, meanings and interpretations of the traditional scientific space. A third concept is also introduced, termed the Hierarchical Uncertainty and Semantic components of the Object-Field model, which ‘populate’ the concept space with variable levels of uncertainty and semantics. Sketching is also suggested as a way to represent, record and manage conceptualisation uncertainty, as it is an element of uncertainty that is frequently overlooked yet has a significant impact on the way in which subjects understand and use geospatial data. Given that conceptualisation is a subjective process that varies between individuals, this form of uncertainty has particular importance in collaborative decision-making about indeterminate phenomena.

This thesis constructs technical and theoretical scientific knowledge for the design and development of geospatial models that aim to support human decision-making about indeterminate phenomena by means of multiple conceptualisations and interpretations. The theoretical knowledge is embodied in the UML formalization of the Object-Field model and the technical knowledge is embodied in the Object-Field GISystems Prototype.
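For readers who prefer code to prose, the sketch below illustrates the aggregation idea in Java, the language of the Object-Field GISystems Prototype. It is a minimal illustration only: the class and method names are invented for this post and are not the thesis implementation.

```java
import java.util.ArrayList;
import java.util.List;

/** A minimal sketch of the aggregation idea: names are illustrative, not the thesis API. */
final class ElementaryGeoParticle {
    final double x, y;        // location in the traditional scientific space
    final double value;       // observed attribute value (e.g. a land-cover index)
    final double uncertainty; // 0 = certain, 1 = completely uncertain
    final String semantics;   // interpretation placed in the concept space

    ElementaryGeoParticle(double x, double y, double value,
                          double uncertainty, String semantics) {
        this.x = x; this.y = y; this.value = value;
        this.uncertainty = uncertainty; this.semantics = semantics;
    }
}

/** A discrete object seen as an aggregation of particles (the "object" view);
    iterating over the particles recovers the "field" view of the same phenomenon. */
class GeoObject {
    private final List<ElementaryGeoParticle> particles = new ArrayList<>();
    private final String concept; // e.g. "town centre"

    GeoObject(String concept) { this.concept = concept; }

    void add(ElementaryGeoParticle p) { particles.add(p); }

    /** One simple object-level summary derived from the field-level description. */
    double meanUncertainty() {
        return particles.stream().mapToDouble(p -> p.uncertainty).average().orElse(0.0);
    }

    List<ElementaryGeoParticle> asField() { return particles; }

    String concept() { return concept; }
}
```

The point is simply that one collection of particles can be read as a continuous field (by iterating over asField()) or as a discrete object (by summarising it), which is the aggregation idea described in the abstract.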

Friday, October 17, 2008

The Object-Field model with Uncertainty and Semantics


Having completed the PhD (October 15, 2008), I will publish parts of the PhD chapters in the blog next month. I will also make available the Java code of the Object-Field GISystems Prototype.

Sunday, October 12, 2008

Understanding Anasazi using Agent-Based Modelling

'The 1050 Project' has as one of its outputs a synthetic agent-based model that places artificial Anasazi agents on an empirical landscape. Note that the landscape is a reconstruction of seven production zones based on archaeological and other information from Long House Valley, northeastern Arizona.




More information is included in chapters 4-6 of the Generative Social Science book by Epstein and others. A Java-based implementation of the Anasazi model can be accessed from http://ascape.sourceforge.net/

Friday, October 10, 2008

SimCity Creator for Wii

There is a game called "SimCity Creator" for the Wii, which enables you to create your own city.



I wonder whether these games will help in raising awareness of the complexity of spatial modelling (as Google Earth/Maps seems to do) or whether they will trivialize this complexity by oversimplifying it.

See also: http://www.freehand.co.uk/games/seera/ by SOUTH EAST ENGLAND REGIONAL ASSEMBLY

Thursday, September 11, 2008

UNDERSTANDING SUSTAINABLE DEVELOPMENT AND URBAN ECONOMIC GROWTH: EXPLORATIONS WITH AN AGENT-BASED LABORATORY



I am currently working on the idea that there is no conflict between the concepts of sustainable development and economic growth, contrary to what some people suggest.

If we accept that ‘Sustainable Development’ is a catch-all term about intergenerational welfare, then optimal allocation of the capital stock can support both sustainability and economic growth. In this case, the capital stock includes both environmental capital (such as fossil fuels and clean water) and man-made capital (such as schools and hospitals).

If the above is accepted, then what is the difference between economic theories of welfare maximisation and sustainable development? I try to answer this question within a triad conceptual framework of economy, effectiveness and efficiency, and a spatial agent-based computational laboratory.

But what is 'sustainable development'? WCED (1987) and UK Defra (2008) define sustainable development as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs”. Although this definition seems clear, it does not provide a conceptual basis for measuring sustainable development in a systematic way (Beckerman 2003). For example, the intra-generational needs of people coevolve in space and time without necessarily satisfying all present needs at any point in space and time.

Pezzey (1992) argues that sustainability is related to measures that sustain an improvement in the quality of life, a view also supported by Faucheux et al (1996), who emphasise the need for intergenerational equity in the context of non-negative change in economic welfare per capita. These definitions signal a shift in defining sustainability by promoting the concept of ‘welfare’ as an all-embracing central variable, as argued above. Beckerman (2003) argues that, since the whole problem is the selection of means towards ‘sustainability’ and the assessment of these means, the concept of sustainable development has nothing to add (if it does not actually subtract from the classical economic objective of welfare maximisation because of the precautionary principle).
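In symbols, the non-declining-welfare view sketched above can be written roughly as follows; this is my own shorthand rather than Pezzey's or Faucheux's notation, and it simply assumes that welfare per capita can be expressed as a function of the aggregate capital stock:

```latex
K(t) = K_{\mathrm{env}}(t) + K_{\mathrm{man}}(t), \qquad
w(t) = f\big(K(t)\big), \qquad
\text{sustainable development} \iff \frac{dw}{dt} \ge 0 \ \text{ for all } t,
```

where K(t) is the total capital stock (environmental plus man-made) and w(t) is economic welfare per capita.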

I try to critically analyze these premises within the ‘triad framework’ (economy, efficiency and effectiveness). Based on this triad framework, I propose ways to measure sustainable development in the context of urban economic growth using a spatial agent-based computational laboratory. I view this laboratory as a step forward in addressing what the UK DOE (1996) argues: it is not clear what sustainable development means, and thus it is difficult to know how to measure it or which policies promote it.

Saturday, September 06, 2008

COMBINING THE ADVANTAGES OF AGENT-BASED AND EQUATION-BASED APPROACHES

I am currently working on ways to better understand the interaction (or tension, according to some) between sustainable development and economic growth as a means of informing policies about intra- and intergenerational welfare. The approach that I take is a fused top-down and bottom-up one, integrating equation-based and agent-based models.

Bobashev and Epstein (2007) published a relevant paper: A Hybrid Epidemic Model: Combining the Advantages of Agent-based and Equation-based Approaches (see also Heterogeneity and Network Structure in the Dynamics of Diffusion: Comparing Agent-Based and Differential Equation Models by Rahmandad and Sterman 2006)

Abstract
Agent-based models (ABMs) are powerful in describing structured epidemiological processes involving human behavior and local interaction. The joint behavior of the agents can be very complex and tracking the behavior requires a disciplined approach. At the same time, equation-based models (EBMs) can be more tractable and allow for at least partial analytical insight. However, inadequate representation of the detailed population structure can lead to spurious results, especially when the epidemic process is beginning and individual variation is critical. In this paper, we demonstrate an approach that combines the two modeling paradigms and introduces a hybrid model that starts as agent-based and switches to equation-based after the number of infected individuals is large enough to support a population-averaged approach. This hybrid model can dramatically save computational times and, more fundamentally, allows for the mathematical analysis of emerging structures generated by the ABM.
Details: http://www.brookings.edu/papers/2007/winter_hybridmodel_epstein.aspx
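The switching idea is easy to state in code. The sketch below is a toy illustration of the hybrid principle described in the abstract, not Bobashev and Epstein's actual model: it runs a simple stochastic, individual-level phase (a stand-in for a full agent-based model) and hands over to deterministic SIR equations once the number of infected passes a threshold. All class names, parameters and thresholds are invented for this post.

```java
import java.util.Random;

/** Toy hybrid SIR model: a stochastic phase while the outbreak is small (where individual
    variation matters), then deterministic SIR equations once infections are numerous
    enough for population averages. Parameters are invented for illustration. */
public class HybridSir {
    static final int POP = 10_000;
    static final double BETA = 0.3;           // assumed transmission rate per day
    static final double GAMMA = 0.1;          // assumed recovery rate per day
    static final int SWITCH_THRESHOLD = 100;  // hand over to the equations above this count
    static final Random RNG = new Random(42);

    public static void main(String[] args) {
        double s = POP - 1, i = 1, r = 0;

        // Phase 1: stochastic daily updates (a stand-in for a full agent-based model).
        while (i > 0 && i < SWITCH_THRESHOLD) {
            int newInfections = binomial((int) s, BETA * i / POP);
            int newRecoveries = binomial((int) i, GAMMA);
            s -= newInfections;
            i += newInfections - newRecoveries;
            r += newRecoveries;
        }

        // Phase 2: deterministic equation-based phase (Euler steps of the SIR equations).
        double dt = 0.1;
        while (i >= 1) {
            double dS = -BETA * s * i / POP;
            double dI = BETA * s * i / POP - GAMMA * i;
            double dR = GAMMA * i;
            s += dS * dt; i += dI * dt; r += dR * dt;
        }
        System.out.printf("Final epidemic size: %.0f of %d%n", r, POP);
    }

    /** Draw from Binomial(n, p) by n Bernoulli trials - adequate for a toy model. */
    static int binomial(int n, double p) {
        int k = 0;
        for (int t = 0; t < n; t++) if (RNG.nextDouble() < p) k++;
        return k;
    }
}
```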


Is this a step towards verifying and validating agent-based computational models? (see also Empirical validation of agent-based models)

Thursday, September 04, 2008

Abstracts from the Geospatial Analysis session at RGS-IBG Conference

The "Geospatial Analysis: GIS & Agent-Based Models" session at the RGS-IBG Conference included 5 interesting and ongoing research works which are summarized below.


Note: The Geospatial Analysis: GIS & Agent-Based Models session is also expected to run next year at RGS-IBG 2009.


ABSTRACTS

Modelling Perceptions of Street Safety to Increase Access to Public Transport; Claire Ellul, Ben Calnan (Cities Institute, London Metropolitan University)

In the context of public transportation, “the provision of a permeable public space contributes to an inclusive journey environment” (Azmin-Fouladi 2007). However, when planning or modelling an urban environment, architectural vision and planning principles often take precedence over the way buildings and urban features make people feel. In particular, the identification of specific urban features that contribute towards a feeling of safety and security is not generally considered.

Our research aims to redress this imbalance by providing planners and local authorities with the means to identify potential barriers to the permeability of public space. It is argued that the removal of negatively-impacting features and the resulting increase in perception of safety will increase the use of public transportation.

We present two key outputs of this process. Firstly, we have developed an Index of Permeability (IoP) for the urban environment, where each relevant urban feature visible from a specific location has been assigned a weighting (through a process of consultation). This weighting contributes towards the overall index of permeability for the point. Secondly, we present a GIS-based implementation of this index using Isovists (which identify the urban features visible from a specific point), extending the index to create a surface of permeability.
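At its core the index is a weighted sum over the features visible from a point. The fragment below sketches that calculation in Java; the feature names, weights and normalisation are placeholders I have invented, not the values elicited in the authors' consultation.

```java
import java.util.List;
import java.util.Map;

/** Sketch of an Index of Permeability for one location: sum the weights of the urban
    features visible from that point and squash the result into [0,1]. The weights here
    are placeholders, not the values elicited through the authors' consultation. */
class PermeabilityIndex {
    // Positive weights for features assumed to increase perceived safety, negative to reduce it.
    private final Map<String, Double> weights = Map.of(
            "street_light", 0.8,
            "cctv_camera", 0.5,
            "active_shopfront", 0.6,
            "blank_wall", -0.7,
            "underpass", -0.9);

    /** visibleFeatures would come from an isovist query at the point of interest. */
    double indexAt(List<String> visibleFeatures) {
        double sum = visibleFeatures.stream()
                .mapToDouble(f -> weights.getOrDefault(f, 0.0))
                .sum();
        // Logistic squash so points can be compared and interpolated into a surface.
        return 1.0 / (1.0 + Math.exp(-sum));
    }
}
```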

System wide cultural districts: mapping and clustering the tangible and intangible cultural assets for the policy design of the regional clusters in the Veneto Region, Italy; Pier Luigi Sacco, Guido Ferilli (IUAV University), Massimo Buscema, Terzi Stefano (Semeion Research Center)

In previous research carried out by Sacco et al. the notion of system-wide cultural districts has been introduced and analyzed. In particular, system-wide cultural districts are horizontally integrated local clusters of economic activities in which culture plays a key strategic role as a social activator of innovative processes and practices, as well as an attractor of talent and resources, a factor of social cohesion and of networking, and of course as a sector with its own value added.
In other, related research from the same group, Artificial Neural Networks (ANN) techniques have been adopted to investigate to what degree they were able to single out emergent industrial districts of various kinds in selected areas of the Italian territory.
In this paper, we combine these two strands of research in a project carried out under the initiative of the Veneto Region, one of Italy’s outstanding productive regions. In the first phase of the project, the spatial distribution and clustering of all cultural activities and facilities with a non-occasional character has been mapped. This has led us to identify qualitatively a certain number of emergent culture-driven clusters. In the second phase, a battery of innovative ANN techniques has been employed to identify the ‘centroids’ of the cultural clusters and to check to what extent they overlap with the poles of the Region’s overall productive systems.
Finally, an analogous analysis has been conducted for specific cultural sectors – visual arts, performing arts, museums, and so on – to investigate to what extent they tend to gravitate upon specific cultural clusters and to what extent they are useful for defining prospective local specializations by means of a specific policy design process.

Revealing the fuzzy geography of an urban locality; Richard Flemmings (Blom Aerofilms Ltd & Birkbeck University of London)

The delineation of urban geographical boundaries can be problematic, particularly when unitary authority boundaries do not represent perceived reality. The lack of agreement between perception and the reality of political boundaries makes an urban locality a fuzzy geography. This fuzzy geography can be exploited, for example by estate agents who wish to alter an area to increase property values. By giving such fuzzy boundaries definition, better clarity can be achieved between estate agent and customer.

A method is proposed here that gives definition to the boundary of an imprecise region using the internet as the information source. Kernel density estimation is used to transform geo-tagged internet search results into a continuous surface. This is both compared and combined with a kernel density estimation of relevant Ordnance Survey MasterMap® cartographic text labels. A composite Index of Urban Locality is given to represent the fuzzy boundary of Clifton, Bristol. The resulting continuous surface is graded based on membership. Thus, the extent that a location is within or is not within the urban locality is depicted. The success of this output has been verified using estate agents’ interpretations of the boundary of Clifton. The Index of Urban Locality has also been applied to the region of Bedminster, Bristol, with some success.
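As a rough illustration of the method, the fragment below evaluates a Gaussian kernel density surface from a set of geo-tagged points and blends two such surfaces into a composite index. The bandwidth, the 50/50 weighting and the class names are assumptions made for this sketch, not details of Flemmings' implementation.

```java
/** Sketch: a Gaussian kernel density estimate over geo-tagged points, and a composite
    index formed as a weighted blend of two such surfaces. All parameters are assumed. */
class FuzzyLocality {
    /** Density at (x, y) from point observations with a fixed bandwidth h. */
    static double kde(double x, double y, double[][] points, double h) {
        double sum = 0.0;
        for (double[] p : points) {
            double dx = (x - p[0]) / h, dy = (y - p[1]) / h;
            sum += Math.exp(-0.5 * (dx * dx + dy * dy));
        }
        return sum / (2 * Math.PI * h * h * points.length);
    }

    /** Composite Index of Urban Locality at a cell: here a simple 50/50 blend of the
        web-search surface and the cartographic-label surface, each rescaled to [0,1]. */
    static double indexOfUrbanLocality(double webDensity, double labelDensity,
                                       double webMax, double labelMax) {
        return 0.5 * (webDensity / webMax) + 0.5 * (labelDensity / labelMax);
    }
}
```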

Geospatial Modelling and Collaborative Reasoning of Indeterminate Phenomena: The Object-Field Model with Uncertainty and Semantics; Vlasios Voudouris (London Metropolitan Business School & City University London)

The need for a conceptually unifying geospatial data model for the representation of geospatial phenomena has already been acknowledged. Recognising that the data model employed by and large determines what can be done by way of analysis, and the methods by which that analysis can be undertaken, there has been some activity in developing unifying data models for geospatial representation in digital form. Some successes have been reported. Nevertheless, progress has been slow, especially at the conceptual and logical levels of abstraction of geospatial data models.

Concepts and ideas from cognitive and perceptual psychology as well as GIScience and GISystems literature are examined within the context of geospatial data modelling and reasoning. Drawing on and combining these concepts, ideas and successes with an empirical approach, this work presents the fused Object-Field model with uncertainty and semantics at the conceptual and logical levels of abstraction.

The purpose of the Object-Field model is to better support the representation and collaborative reasoning of geospatial phenomena, particularly indeterminate phenomena such as town centres. It is shown that many of the concepts required to better represent geospatial phenomena can be derived from a single foundation that is termed the elementary-geoParticle. This serves as the standard for integrating the dual continuous-field and discrete-object data models by means of aggregation.

GIS and Built Form: Using Pattern Recognition for Energy Efficiency Models; Donald Alexander, Simon Lannon, Orly Linovski (Cardiff University)

Much of what has been written about residential development in the UK relies on anecdotal evidence (Whitehand and Carr 1999). Little ‘on-the-ground’ research has been conducted due to the significant time required for investigating development through building records and other municipal data. A wide variety of research often requires detailed building information that has previously only been obtainable through walk-by surveys or building records. This paper examines alternative methods for determining building age using pattern recognition algorithms.

This model has wide ranging applications including researching urban development patterns, conducting urban design studies and assessing energy efficiency. This paper specifically focuses on the use of building data for energy efficiency studies. Modelling software has been developed to quantify energy emissions but requires detailed information of the built environment and age of buildings (Jones et al. 2000). It is proposed that pattern recognition algorithms can be used to automate the collection of this data from GIS and aerial photos.

To develop this technique, two study areas in Wales were chosen as case studies. These areas were surveyed manually to establish a baseline for assessing the built form characteristics of each development that could be incorporated into the algorithm. This paper will present the results of the development characteristic study, as well as the efficacy of using these to determine the age of dwellings.

Saturday, August 30, 2008

Towards a General Field model and its order in GIS

An interesting paper has been published in IJGIS by Y. Liu, M.F. Goodchild, Q. Guo, Y. Tian, and L. Wu, titled 'Towards a general field model and its order in GIS'. It is very closely related to the work of Cova and Goodchild (2002) and Kjenstad (2006), and to my PhD work reported in Voudouris, Wood and Fisher (2005), Voudouris, Fisher and Wood (2006), Voudouris and Marsh (2007) and Voudouris (2008) - the full PhD will be made available soon after the viva examination.

Abstract
Geospatial data modelling is dominated by the distinction between continuous-field and discrete-object conceptualizations. However, the boundary between them is not always clear, and the field view is more fundamental in some respects than the object view. By viewing a set of objects as an object field and unifying it with conventional field models, a new concept, the General Field (G-Field) model, is proposed. In this paper, the properties of G-Field models, including domain, range, and categorization, are discussed. As a summary, a descriptive framework for G-Field models is proposed. Then, some common geospatial operations in geographic information systems are reconsidered from the G-Field perspective. The geospatial operations are classified into order-increasing operations and non-order-increasing operations, depending on changes induced in the G-Field’s order. Generally, the order can be viewed as an indicator of the level of information extraction of geospatial data. It is thus possible to integrate the concept of order with a geo-workflow management system to support geographic semantics.

The paper can be downloaded from:
http://www.geog.ucsb.edu/%7Egood/papers/451.pdf
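In programming terms, the unifying move is to treat a field as a mapping from locations to values, and an object field as the special case whose values are sets of discrete objects. The Java sketch below is my own paraphrase of that idea, not the formalism of Liu et al.; all names and the placeholder surfaces are invented.

```java
import java.util.Set;

/** A field as a mapping from locations in a domain to values in a range. */
interface Field<V> {
    V valueAt(double x, double y);
}

/** A conventional continuous field, e.g. elevation. */
class ElevationField implements Field<Double> {
    @Override public Double valueAt(double x, double y) {
        return 100.0 + 5.0 * Math.sin(x) * Math.cos(y); // placeholder surface
    }
}

/** An object field: the value at each location is the set of discrete objects
    (e.g. parcels, buildings) associated with it. Both cases satisfy the same
    Field contract, which is the sense in which the two views are unified. */
class ObjectField implements Field<Set<String>> {
    @Override public Set<String> valueAt(double x, double y) {
        return (x > 0 && y > 0) ? Set.of("parcel-17") : Set.of(); // placeholder lookup
    }
}
```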

------
References

Cova, T.J. and Goodchild, M.F. (2002) Extending geographical representation to include fields of spatial objects. International Journal of Geographical Information Science, 16, pp. 509–532.

Kjenstad, K. (2006) On the integration of object-based models and field-based models in GIS. International Journal of Geographical Information Science, 20, pp. 491–509.

Voudouris, V. (2008) Geospatial Modelling and Collaborative Reasoning of Indeterminate Phenomena: The Object-Field Model with Uncertainty and Semantics. Presented at the RGS-IBG International Conference 2008.

Voudouris, V. and Marsh, S. (2007) Geovisualization and GIS: A Human Centred Approach. In: Ferri, F. (ed.) Visual Languages for Interactive Computing: Definitions and Formalizations. Idea Group Inc.

Voudouris, V., Fisher, P.F. and Wood, J. (2006) Capturing Conceptualization Uncertainty Interactively using Object-Fields. In: Kainz, W., Reid, A. and Elmes, G. (eds.) 12th International Symposium on Spatial Data Handling (Vienna, Austria). Springer-Verlag.

Voudouris, V., Wood, J. and Fisher, P.F. (2005) Collaborative geoVisualization: Object-Field Representations with Semantic and Uncertainty Information. In: Meersman, R., Tari, Z., Herrero, P. et al. (eds.) On the Move to Meaningful Internet Systems OTM 2005, Lecture Notes in Computer Science (LNCS), Vol. 3762. Springer, Berlin.

IJGIS Valediction by Peter Fisher


Peter Fisher's Valediction is very interesting and promising for those who work in the area of geospatial data modelling (or representation), as it is the foundation of all else that is possible or can be done.

A PDF version of Fisher's Valediction can be accessed from http://www.informaworld.com/smpp/content~content=a784379045~db=all~order=page or you can read the HTML version below:

IJGIS Valediction

1. Introduction
I have been editing the International Journal of Geographical Information Science for the last 14 years. I was first associated with the journal in this role in 1994 during the publication of volume 8. Then, it was publishing 6 issues per year with a target length of 600 pages, which allowed approximately 30 articles to be included. In 1996, this increased to 800 pages, and in 2005 to 1200 pages. The journal now carries nearly 60 articles per year, and will have a modest increase in volume again next year.

I took over the journal from the capable hands of Professor Terry Coppock, who worked diligently to establish the journal as one which would persist and become the premier journal of record for those working on the development and application of geographical information systems, whatever their background. I have endeavoured to maintain the status of the journal, and I believe that I have. Elsewhere (Fisher 2007b), I have listed some of the competitor journals of IJGIS. IJGIS is unusual because it is listed in the Journal Citation Rankings of ISI's Web of Knowledge in four subject areas (Geography, Physical Geography, Information Science, and Computer Science). Among competing journals, IJGIS has had the highest impact factor over a number of years.

Volume 11 saw the journal published under its present title, when the name was changed from IJGISystems to IJGIScience, in recognition of the fact that the journal had always been engaged in the publication of research into the science of geographical information which underpins the systems that are in widespread use.

2. A personal view of research published in IJGIS
The research that has been published in IJGIS over the years can be divided many ways, but I choose to look at it as is illustrated in figure 1. These are the themes I see which have persisted through the 14 years. The structure identified here was first articulated at a presentation at the AGILE 2007 Annual conference in Aalborg. I would like to thank the organizers (including Lars Bodum and Monica Wachowicz) for inviting me to give that presentation.




Figure 1. Personal view of general research topics published within IJGIS.


To me, the most important research theme is that of Representation. It is the foundation of all else that is possible or can be done with geographical information. I view it as having five components:

Spatial Information Theory addresses how we conceptualize spatial information, and is absolutely central to GIScience. It has been a persistent theme with issues of RESELS, geoatoms, object orientation, and multiscale and multiresolution information as part of it.
Issues of Uncertainty in its broadest sense may be the most common research topic published in IJGIS. This includes probabilistic and fuzzy formalisms, error modelling, rough sets, and semantic uncertainty, among others.
Researchers have long bemoaned the lack of Temporality in geographical databases, but over the 14 years, many papers have been published in this area.
IJGIS has not been slow in publishing the results of Ontological research both from a database construction point of view and from a semantic understanding point of view.
Finally, and perhaps a smaller component than is desirable, is the research on Geometric representation.
The second broad topic is modelling, which, for convenience, I divide into:

analytical and statistical modelling, including network modelling and spatial statistics; and
process modelling, including modelling of social and environmental processes and the technology of those models.
Visualization has always been a major theme within GIScience, and of course, cartography, and computer cartography in particular, is one of the antecedents of the field. Many interesting papers and special issues have been published on topics from this field, including generalization, visual analytics, geocollaboration, and interactive mapping.

Cognitive studies and usability are concerned with how we relate to the world and the information about that world. There are increasing studies on usability, but my personal view is that studies of spatial cognition, which should ground much research in GIScience, have not been published in IJGIS, with very few exceptions. I hope that the future may see more such research linking these areas.

A final persistent theme has been that of data policy with which I bracket social construction of information. The first has been researched in many ways and most recently within the umbrella of Spatial Data Infrastructures. The social construction argument is seen by some as anti-scientific, but in my view it is part of all information, as some recent studies have demonstrated, and those studies have shown some potential for working with different world views within GIScience.

Over the 14 years, paper submissions on some research topics have ceased. Parallel processing, on which a special issue was published in Volume 10, has become a low-level system issue, with barely a mention of the topic in more recent issues of IJGIS. Similarly, Interoperability was the topic of a special issue in Volume 12, but it too has not been addressed directly in much writing in IJGIS since. The topic remains important, but within the research published within IJGIS, it has been subsumed within the interoperability of data, or within the developing area of Web technologies. Another person might see papers on Web technologies as another emerging component of the IJGIS research literature, but currently I see the Web as an issue that touches many of the other topics raised, particularly Visualisation and Data Policy.

Discussion of the structure outlined here has led others to suggest to me that the World Wide Web, Location Based Services and Global Change are so-called 'killer applications' for GIScience, and so might be viewed as themes for structuring the field. These are all interesting areas for research with their own challenges and problems. However, I would rather see these as important areas for application, along with many others, rather than as driving forces. I believe that when an application becomes a driver, it moulds the science, and I do not believe that all applications will fit one mould. Therefore, it is necessary to keep the independence of the core issues as central to GIScience, and not view GIScience as issues of any one application.

3. Issues in producing the Journal
Many issues could be mentioned in the production process, but two stand out for me. The first is with respect to reviewers, and the second is the preparation of graphics.

3.1 Problems with reviewers
The most intractable problem in managing any peer-reviewed journal is making timely decisions on articles. This process is a trade-off between the need for reviewers to have time to read a paper, and an author's wish to have a rapid response, as well as the editor's wish to 'have a life' and do some of their own research. The most frustrating part of managing the process is that reviewers repeatedly promise to complete a review within a particular time period, but fail to do so. This can be for understandable reasons, but when the reviewer then promises to do the review by some new date but fails, and promises again and again, the process becomes very frustrating for everyone.

When I first started editing, a member of the editorial board said to me 'I hope to complete three reviews for every paper I publish'. 3 to 1 is the minimum ratio of reviewed to published papers to which all active researchers should commit. Because of rejections, unfortunately the ratio actually needs to be considerably higher. Unfortunately, there are people who will never return a review, no matter how many times they promise, and there are others who will always return a review, once they have said they will. Research productivity and administrative responsibility are no indicator of group membership—some of the busiest people are the most reliable. But if you publish one paper, you should commit to reviewing at least three papers, and you should do them as if you were the author, in a prompt and timely manner.

3.2 Problems in graphics
Authors should be more careful in their design of graphics, graphs, and maps. Perhaps the worst are the graphs generated in modern spreadsheets. One particular spreadsheet package uses grey backgrounds so that graphs are highly visible on the screen, but when these are printed, the grey tends to obscure the actual graph, as do such ephemera as the grid lines and oversized point markers. Unfortunately, many authors seem to be ignorant of the design guidelines of Edward Tufte (1983), which should be studied with care by all involved in illustrating scientific articles. Authors should be prepared to make multiple changes to graphs in the process of preparing an article, using smaller symbols and clear, white backgrounds. Similarly, many authors use grey fills for boxes in flow diagrams. On the whole, these are completely redundant and only obscure the text within the boxes. Boxes should be white, with the outline used to code the boxes, if that is desirable.

In the print technology used by the publishers, colour continues to be expensive, but colour in the electronic version of papers is free. This means that as much colour as an author wishes to include can be carried in any article, but the print version of that article may include all those graphics in greys. The problem with this is that many colours will produce the same grey, so that if information is colour-coded, but the print version is in grey, the coding may not carry over. Therefore, authors need to continue to be careful in their use of colour and, where necessary, may need to consult experts in the use of colour.

4. Is it still research?
There are many interesting and challenging research topics to be addressed in GIScience, but there are some topics which might be considered to be passé for publication in IJGIS. Without wishing to put off researchers, I would like to mention two here.

First is the annual assault on the editors of papers documenting yet another instance of a raster-GIS implementation of the Universal Soil Loss Equation (USLE), by authors who have not read the literature well enough. This topic was first addressed in the 1980s by, for example, Spanner et al. (1983), and papers being submitted in 2007 are very little different. I am not saying that soil erosion modelling is passé, but as scientific research the USLE is, both within and outside GIS. On the contrary, research relating to more advanced soil erosion models is welcome, and excellent contributions have been included in the Journal, when they meet the review standards.

Similarly, many articles have been written on comparisons of a modest number of surface interpolation algorithms in an experimental situation (whether from point observations or contours and using IDW, spline, and kriging, perhaps). Papers continue to be submitted doing no more. It is easy to conceive of such an experiment, but it is a real challenge to make it original and different from previous experiments, and to demonstrate that the conclusions can be generalized to other contexts. Generation of digital elevation models is no longer dependent on the interpolation of values from sparse point observations or contour lines, but has moved over to measurement-based remote sensing devices such as Lidar and Ifsar. Interpolation remains important for these technologies, but the issues have changed. Future experiments need to be demonstrably relevant.

5. Thanks
During the 14 years I have been working on the journal, I estimate that about 625 papers will have been published, which means that something of the order of 1800 papers have been submitted. A number of people in various roles have been involved, and I would like to record my thanks to them all (in spite of having listed many in a previous acknowledgement; Fisher 2007a):

First are those people whose work has been published in the journal over the last 14 years. I thank them for taking the time to conduct the interesting research they have submitted, and for writing it up. Almost without exception, they have taken criticism from reviewers and papers have gone through changes in the review process. We believe that the published papers which result are better than those originally submitted, but making the changes can be nonetheless painful for the authors. It has been a pleasure for my colleagues and me to see this work through the review process.
Because each paper is sent to at least three reviewers, approximately 5400 requests for reviews have been dispatched. I am ashamed to say that I have no idea how many reviewers this equates to, because I do not know how many have been asked more than once, although I suspect it is the majority. My thanks go to all those who have responded with reviews, when requested. The work involved in taking time and care to consider and critique the work of others cannot be understated, but it can also be most rewarding. Foremost among these reviewers have been members of the editorial board.
All journals have two classes of author: authors whose work is accepted, and those whose work is rejected. The acceptance rate is approximately 30% of submissions, and therefore the latter group is about twice the size of the former (except that, of course, some authors are in both categories), and having taken the effort to conduct the research and write the paper to then have it rejected for publication is always very dispiriting. These are the unacknowledged facilitators of the peer-review process, and I would like to take this opportunity to thank them all, because their work has come to nothing and will not be published in this journal.
During the 14 years, 16 special issues have been published, and a number more are in preparation. The editors of these issues are numerous, but they are acknowledged by being the authors of guest editorials.
It has been my pleasure to work with a number of other people in editorial roles, including Eric Anderson, Steven Guptill, Marc Armstrong, Harvey Miller and now Mark Gahegan as North American Editors (now Editor for the Americas), and Dave Abel and now Brian Lees as editors for the Western Pacific (now Editor for Australasia and Eastern Asia). I have worked with Neil Stuart, Nick Tate, and Lex Comber as Book Review Editors.
Throughout my period as editor, the Publisher's principal representative has been Richard Steele. Direct managerial contacts for the journal have been Meloney Bartlett, Rachel Sangster, and Virginia Klaessen. On the production side, managing the work of anonymous typesetters and copy editors, are the people with whom authors have communicated about proofs (whether they know it or not). They have been David Chapman, Sophie Middleton, Heidi Cormode, and currently James Baldock.
Finally, I must thank Jill Fisher, who has given continuing support and assistance in communicating with authors and reviewers.
The system of peer review, which is the current paradigm for scholarly publication, would not work without all these players; all are crucial to the process. My thanks to all these people in their various roles, from reviewers and authors, to editors and production managers, and to anyone else I should have named but have not. The last 14 years would not have been possible without each and every one of you.

I would like to close by offering my very best wishes for continuing success of the journal to the future editorial team, including Brian Lees (Australian Defence Force Academy, University of New South Wales) as both Editor-in-Chief and Editor for Australasia and Asia, Mark Gahegan (Pennsylvania State University) as Editor for the Americas, and Sytze de Bruin and Monica Wachowicz (Wageningen University) as Editors for Europe and Africa. I hope that they find it in as good condition as Terry Coppock left it for me.

References
1. Fisher, P. F. (2007a) Preface. In: Fisher, P. (ed.) Classics from IJGIS: Twenty Years of the International Journal of Geographical Information Science and Systems, pp. v-vi. Taylor & Francis, London.
2. Fisher, P. F. (2007b) 20 years of IJGIS: Choosing the classics. In: Fisher, P. (ed.) Classics from IJGIS: Twenty Years of the International Journal of Geographical Information Science and Systems, pp. 1-6. Taylor & Francis, London.
3. Spanner, M. A., Strahler, A. H. and Estes, J. E. (1983) Soil loss prediction in a geographic information system format. In: Papers Selected for Presentation at the 17th International Symposium on Remote Sensing of Environment, Volume 1, pp. 89-102. Environment Research Institute of Michigan, Ann Arbor.
4. Tufte, E. R. (1983) The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT.

Thursday, May 01, 2008

Unlocking Economic Systems with Agent-Based Computational Economics: The EU Leasing Market


This is the work that I (with Haris) presented at the RGS-IBG International Conference 2008.

Studies of economic systems must consider how to handle interdependent feedback interactions of micro behaviors, interaction patterns and macroscopic regularities. The Agent-Field framework is an approach to agent-based computational economics. In this framework, models of economic systems are viewed as a collection of multi-scale and structured agents operating in indeterminate economic environments conceptualized as continuous, differentiable fields with variable levels of spatial uncertainty. We propose a formalization of the Agent-Field framework using the Unified Modeling Language. We explore potential advantages and disadvantages of the framework for the study of economic systems using the EU leasing market. This enables us to formulate an initial frame representation of major economic agents for the EU leasing market. We predicted the direction that the Central and Eastern European cluster of high-growth economies can be expected to take as its economies move towards higher prosperity levels. Within the scope of the work, it has been shown that the Agent-Field framework is an intuitive rather than an abstract process in modeling economic systems. This intuitive process needs more understanding of the interactions between the economic environment and the agents within it. The Agent-Field approach seems ontologically well founded for the growing field of agent-based computational economics.

Monday, April 07, 2008

Geospatial Analysis: GIS & Agent-Based Models


This year I organise the Geospatial Analysis: GIS & Agent-Based Models session at the RGS-IBG Annual Conference 2008 in London. We hope that the session will attract interest from users of GIS and agent-based models for the analysis of geospatial phenomena, and particularly those who are interested in the fusion of these two areas. The deadline for submission to this session is 17th April 2008. Abstracts should be sent to v.voudouris@londonmet.ac.uk

Sunday, February 03, 2008

On the Integration: GIS with Agent-Based Models


ArcGIS now interacts with Repast using the Agent Analyst:


The Agent Analyst is a free and open source ArcGIS extension that allows ArcGIS users to build geographically aware agent-based models. Agent Analyst achieves this goal by integrating the free and open source Recursive Porous Agent Simulation Toolkit (Repast) into ArcGIS (see here for details).


This offers interesting opportunities for both the agent-based community (see Batty, 2005) and the GIS community (see Repast Vector GIS Integration for details). In my PhD thesis, I suggest a way of integrating agent-based models with GIS using the object-field model (details will be posted soon).

Reference
Batty, M (2005), 'Approaches to Modelling in GIS: Spatial Representation and Temporal Dynamics'. In Maguire, Batty and Goodchild (eds.): GIS, Spatial Analysis and Modelling, ESRI Press

Agent-Field Economic Model

Recently, I have been working on Agent-Based Computational Economics and the Object-Field model as a novel way to explore economic systems. This is my proposal (with Haris):

Economies are complex adaptive systems encapsulating micro structures and behaviors, interaction patterns, and macroscopic regularities. Thus, studies of economic systems must consider how to handle interdependent feedback interactions of micro behaviors, interaction patterns and macroscopic regularities.
One such approach is the Agent-Field approach to agent-based computational economics. In this framework, models of economic systems are viewed as collections of multi-scale and structured economic agents from the real world, such as individuals, social groupings, institutions and physical entities, operating in smooth, continuous economic environments called fields. In other words, the Agent-Field framework is a fused agent-based model that captures agents in indeterminate economic environments conceptualized as continuous, differentiable fields with variable levels of spatial uncertainty and embedded semantics. The science of the Agent-Field model is drawn from Geographic Information Science (particularly the Object-Field model) and from Agent-Based Computational Economics. A common base-model for the Agent-Field framework is therefore proposed and given a formal definition using the Unified Modeling Language (UML).

We explore potential advantages and disadvantages of the Agent-Field framework for the study of economic systems using the EU leasing market as an example of the application of the framework. This also enables us to formulate an initial frame representation of major agents and smooth, continuous economic environments for the EU leasing market (leasing being one of many ways in which businesses finance their capital investments). Each national leasing market can be viewed as an agent, with a range of particular internal dynamics that gives it a specific character (e.g. the preference of national businesses for the use of leasing over time, expectations for future economic growth, attitudes towards other forms of financing investments, etc.). At the same time, a number of exogenous 'forces' also have an effect on each agent: forces such as the evolution of other national economies in close proximity, cross-border economic activity, pan-European taxation/regulation changes, etc.

By studying the leasing penetration in each national market (defined as the ratio of new yearly leasing volumes to the total yearly fixed capital formation in each economy) and comparing it with a measure of each economy's overall wealth (e.g. GDP per capita), Europe's national leasing markets fall into three clusters of agents: the first includes economies that are both large and wealthy (viewed in GDP/capita terms) with a mature leasing market reaching high penetration levels. The second cluster includes economies that are wealthy and mature, but show very low leasing penetration levels. A third distinct cluster broadly includes the new EU entrants, i.e. the smaller but high-growth economies of Central and Eastern Europe, characterised by low GDP/capita levels while at the same time exhibiting high leasing penetration levels.

An Agent-Field model can be developed to map the dynamics that drive each cluster of economies, so as to help predict the direction that the third cluster of Europe's high-growth economies can be expected to take as its economies move towards higher prosperity levels. Within the scope of the work, it has been shown that the Agent-Field approach is an intuitive rather than an abstract process in modeling economic systems. This intuitive process needs more understanding of the interactions between the economic environment and the agents within it, as these elements represent the logic underlying the problem at hand rather than mathematical notation.
The Agent-Field approach seems ontologically well founded for the growing field of agent-based computational economics.
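To make the clustering criterion concrete, the fragment below computes the penetration ratio defined above and assigns a national market to one of the three clusters using simple cut-offs. The threshold values are invented for illustration and are not the boundaries used in our analysis.

```java
/** Sketch: leasing penetration and a naive three-way cluster assignment.
    The numeric cut-offs are placeholders, not the boundaries used in the study. */
class LeasingMarket {
    final String country;
    final double newLeasingVolume;            // new leasing volumes in a year
    final double grossFixedCapitalFormation;  // total fixed capital formation in the same year
    final double gdpPerCapita;

    LeasingMarket(String country, double newLeasingVolume,
                  double grossFixedCapitalFormation, double gdpPerCapita) {
        this.country = country;
        this.newLeasingVolume = newLeasingVolume;
        this.grossFixedCapitalFormation = grossFixedCapitalFormation;
        this.gdpPerCapita = gdpPerCapita;
    }

    /** Leasing penetration as defined above: new leasing volumes over fixed capital formation. */
    double penetration() {
        return newLeasingVolume / grossFixedCapitalFormation;
    }

    /** Naive assignment to the three clusters described in the proposal. */
    String cluster() {
        boolean wealthy = gdpPerCapita > 25_000;        // assumed cut-off
        boolean highPenetration = penetration() > 0.15; // assumed cut-off
        if (wealthy && highPenetration)  return "large, wealthy, mature leasing market";
        if (wealthy)                     return "wealthy but low leasing penetration";
        if (highPenetration)             return "high-growth economy with high penetration";
        return "unclassified";
    }
}
```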

Wednesday, January 16, 2008

Intelligent Memory: Understanding the conceptualization process


Together with Jo Wood and Peter Fisher, I discussed the idea of conceptualization uncertainty at SDH 2006. We argued that conceptualization uncertainty is introduced during the conceptualization of a phenomenon rather than arising from measurement error.

Barry Gordon, professor of neurology and cognitive science, presented the concept of Intelligent Memory, which is the mostly unconscious, lightning-fast thought process that connects pieces of memory and knowledge in order to generate new ideas. It's the memory that aids us in making everyday decisions, gives us the chuckle of a good joke, sparks a "Eureka!" solution to a problem, and enables us to enjoy a work of art. Intelligent Memory is what powers most of our mental life.

I personally think that understanding intelligent memory can give us some ‘clues’ about how people conceptualise and argue about indeterminate phenomena such as town centres.

The Science Behind Intelligent Memory

What follows is an in-depth explanation of the neuroscience behind the Intelligent Memory concept. It's for readers who want to understand the scientific underpinnings of memory and learning.
All memories, along with every perception, action and thought, arise from the activity of nerve cells. However, the memories that we are conscious of and which are important to us obey somewhat different rules than nerve cells. This makes sense, given that our important memories generally require the coordinated action of thousands, if not millions, of nerve cells.
Paradoxically, some of these nerve cells help generate memories by not being active or not communicating with other neurons. They're somewhat like the essential patches of blank canvas that an artist uses to suggest clouds or a piece of reflected light. Another analogy can be found in the printing that you are reading at this moment. The letters and the words the ink forms are meaningful because of where the ink is, and is not.
The key to memory is time. In essence, memory is a displacement of knowledge a little bit into the future. Or, from a future perspective, it's the retrieval of knowledge from the past. This knowledge can be latent, or unused, or active and available. When nerve cells are firing, they are actively carrying information, and so the memory is active, and usable.
But this form of memory is also transient and, by itself, it can exist for only a few fractions of a second. What makes a memory permanent is not a nerve cell constantly firing but over time acquiring more potential for being able to fire. In other words, a nerve cell becomes more sensitive to firing or to staying quiet. This sensitivity to being triggered into action can be varied up or down. The processes that change susceptibility are built into nerve cells. There are many of these processes, including temporary changes in the permeability of the nerve cell membrane and permanent changes within its DNA. Correspondingly, they can take place over different time scales. Changes in the permeability of the nerve cell membrane can occur in fractions of a second, while changes in the proteins within a nerve cell may take hours to days to generate. And DNA may take weeks to months or years to change.
One of the crucial contributors to nerve cell sensitivity is individual experience - whether and how often a cell has fired before. If a nerve cell has been triggered to fire in the past, in general it will be more sensitive to those triggers in the future. Yet, if a nerve cell has been active over long periods of time, it gradually becomes less sensitive and needs increasingly more stimulation to set it off or produce changes.
Oddly enough, this intrinsic regulation is basic to creating intelligent memories. This regulation, when repeated over and over, produces particular kinds of memories - memories that arise through practice. Repeating a thought or action strengthens and weakens individual connections between nerve cells, and the upshot of many connections is learning. By and large, this learning happens relatively slowly. It takes a fair amount of repetition to convince nerve cells to be more sensitive the next time. Doing something once doesn't do it. Doing something twice or three times doesn't do it. But doing something hundreds or thousands of times definitely does.
You know these kinds of memories well. They are the memories you acquire when you learn how to ride a bicycle, to drive a car, to play golf or to add 2 + 2. As you acquire them, you can strengthen them quickly if each time you think about the precise right way and immediately correct your mistakes. However, if a task is complicated, you need a great deal of practice.
Although so far the focus has been on individual nerve cells, keep in mind that most of the memories and activities that mean anything to us take long chains of nerve cells. Catching a ball requires chains for seeing as well as chains for hand control. Nevertheless, individual nerve cells and connections between them are the basis for these activities.
Getting back to how nerve cells form memories and learn: on their own, individual nerve cells don't decide whether to learn. Brains as complex as ours have additional circuits of nerve cells that monitor what's important and what needs to be repeated and remembered. Such circuits control how other neural circuits learn. They can even force neural circuits to learn quickly. ("Enough daydreaming - remember this!") Or, they can stop them from learning at all. These control circuits also dictate how the more basic neural circuits are wired together, which get inputs and which do not, and which chains of circuits are beefed up and which are broken up and rewired.
And, as you may have guessed, our brains also have circuits that monitor and control the controlling circuits. And there are undoubtedly monitors and controls for the monitoring and controlling circuits, and so forth. Neuroscience doesn't completely know how many levels of controls our brains possess. They're hard to identify or track down because there is not a strict hierarchy. Instead, some controlling circuits seem to influence other controlling circuits at the same level and sometimes lower-level processes can boss around their controllers.
Our brain's basic wiring plan governs how we perceive, act, think, and remember. But to understand intelligent memories, we need to elaborate beyond this basic scheme and look at the links between nerve cells and nerve circuits. It's these connections which are the true building blocks of thoughts, and Intelligent Memory. ("Intelligent Memory" is our shorthand term for all the different intelligent memories. They all work much the same way; it's just their specific contents - such as words or images - that differ.)
What we think of as a single thought in our mind - "ball" for instance - is composed of many fragments of thoughts. If you think about a ball, you do not normally separate its color from its roundness or its bounciness. However, your brain does. Its color and shape and function are stored in different regions of the brain, although not every distinct element has its own region.
In the brain, these elements of thought are represented by patterns of activity in many nerve cells. These patterns can be active and the nerve cells firing, or they can be latent, existing in the pattern and strengths of connections between sets of nerve cells. An idea in our mind -- whether it's the color or the shape or movement of a ball -- is represented in the activity or latent activity of these sets of nerve cells as a whole. And thoughts that we are very interested in are likely to involve thousands if not tens of thousands or more nerve cells.
Most complex thoughts have to be learned; they are not innate. When elemental thoughts arise from the senses, it's usually constant exposure, like playing with balls as a child, that gradually produces the whole idea inside our minds. The same process seems to be at work for thoughts or concepts that have no obvious sensory or other correlates.
Elements of thoughts are linked in many ways. Sometimes they are linked just by being part of the same entity in the outside world, as in the case of the ball. In this case, they are linked by experience. But the most interesting links for our purposes -- the links that make up intelligent memories -- are ones we discover and put into place. They are the links, for example, that allow a child to see the similarity between the ball he is throwing and the planet he is standing on.
The links between elements of thoughts, or between thoughts themselves, are patterns of neural activity, either active or latent. Therefore, they can be learned.
Links between thoughts produce thinking. Some kinds of thinking generated by these links may seem so ordinary that we don't call it thinking at all. Being hungry, passing a candy machine, and stopping to put in a coin is hardly a Nobel prize-winning connection. But even these thoughts required having the elements inside of our head (some coming internally, from our hunger; others coming externally, from the image of the candy machine) and then making the connection between them. (It also involved acting upon that connection.)
Solving harder, more complex problems requires more and better connections. But this should not obscure the fact that elements of thought and the links between them are nevertheless necessary. Moreover, it is easy to understand that creative thinking occurs when the links go in unpredictable directions or towards goals we did not set in advance. But they are still links, and they still arise from the same nerve cell activity and the same learning process.
Links are the streets that take us from thought to thought. But finding connections between thoughts, or finding the best ones, can be like trying to find the best route to a destination. The first route we explore may have many false starts or roads that look good on paper but don't work in practice. With time, though, we find a shorter or faster route. So it can be with thinking. Over time, we can prune away the false starts and wrong directions, and eliminate the links that look good originally but prove to be rocky or laborious or time-consuming.

This process of finding the best mental route is the essence of training our thinking. But from the perspective of what nerve cells must do to be trained to think, it is also learning. Memory mediates mental training. This memory, this learning, is what helps make us intelligent. It's also a basis for intelligent memories.
Nerve cells also comprise the circuits that monitor the links and open and close the routes, and these, too, can learn and can improve. The controlling systems, these guidance providers inside of our heads, can be trained and so form another site for intelligent memories.

At least two more physical facts about memory and our brains figure into an understanding of our thinking, learning and creativity, and how they can be improved. One of them relates to how learning can be enhanced. The other relates to how we create miniature intelligences in our minds to help eliminate the bottlenecks of certain kinds of thinking.
Nerve cells learn when they are exercised. Practice, which stimulates connections, makes nerve cells learn. However, nerve cells also learn when we tell them to. When we deliberately activate the circuits that signal something is important, the circuits pass on the message and tell the appropriate other nerve cells that what is happening is important and should be learned well. This happens, for instance, with the learning involved in memorizing facts, names or faces.
While it is less clear that the circuits involved in learning connections between thoughts can be revved up this way, it seems almost certain that interest and motivation synergistically tickle nerve cells and make them learn much faster. So this is another mechanism we can use to enhance our Intelligent Memory.
The bottleneck mentioned earlier arises with our conscious thinking and attention. When we are consciously and fully alert, we can keep no more than a few thoughts in our mind at once. (Perhaps only one thought at a time can be maintained consciously.) Our unconscious, automatic minds, on the other hand, do not have such a bottleneck or limitation. And fortunately, much of our mental activity takes place unconsciously and automatically. When you walk, you don't think about every irregularity in the pavement, or every curb you step on. Those perceptions, decisions, and actions are handled automatically and unconsciously.
Your mind did not always perform such mental tasks automatically. There was a time when you had to learn them. As an infant, you had to learn to walk, which required paying attention to the terrain in front of you and coordinating what you saw and felt to how your body reacted. A better example of the process may be when you learned how to drive a car.
When you learned to drive, you had to learn to pay attention to everything going on and everything you had to do. You watched your hands on the steering wheel, the hood of the car, each sign and traffic light, the other cars on the road, and every pedestrian. You also had to think about what to do in situations: the stop sign or the yield sign, a car getting too close, a pothole. But as you practiced driving and became better, your ability to detect what was happening on the road as well as your reactions became more automatic. You didn't have to consciously look for a stop sign or a red light in order to notice it and automatically respond the right way. And if a pothole suddenly appeared, you knew you would immediately see it and not only swerve but check your mirrors for other cars nearby and slow down.
What you did through all this practice and attention was create automatic mental abilities. You used your conscious mind and deliberate intention to instruct your brain on what to attend to, what decisions to make, and what to be done. Your conscious mind programmed the necessary circuits in your brain. It instructed your vision to pay attention to the color red on a light or a sign. In addition, your mind established a network of override circuits so that the need to stop would take precedence over almost everything else. It also set up a watchdog circuit, so you would not stop too quickly if a car was on your tail. Finally, it programmed what you have to do to stop: take your foot off the gas and push the brake pedal. All these mental processes had to be laid down and practiced to the point that they became instinctive, like a separate intelligence or "minimind" operating on its own.
Now that you are an experienced driver, this minimind is vigilant whenever you're behind the wheel, ready to respond to any stop sign or stop light. You don't have to think about it and it no longer requires your conscious attention. Because it's automated, it works in parallel with your conscious mind. It augments your abilities. It augments your intelligence.
Elementary mental processes are relatively rapid. They operate in hundredths of a second, or at their slowest, tenths of a second. However, these elementary mental processes are often strung together in chains and loops and these strings of processes often take a fair amount of time to unfold. Conscious minds may need more than a second to appreciate a situation, and several seconds of backwards and forwards thinking to come up with a response. Our unconscious, automatic minds, on the other hand, are much simpler and more direct, and can work much faster. A baseball thrown by a professional pitcher moves too quickly from the pitcher's mound to the plate for a batter's conscious thought to react (which takes a minimum of 1/4 of a second). But the batter can preprogram his miniminds to watch the pitcher's throw and to watch the ball, so that his swing has a decent chance of connecting.
All of your thinking, all of your decisions, all of your creativity comes from the same kind of miniminds you apply to skillful driving. But these miniminds cannot always substitute for careful, deliberate thinking. Sometimes, the information they use is too limited, and the judgments they make are too quick. Still, they augment the powers of your conscious mind, which usually does not have the luxury of unlimited evidence and slow, deliberate thinking.
These miniminds, which represent intelligent memories, take time to be constructed, but they are extremely persistent once they have been built. This is often an advantage, since a useful mental tool should be kept around. However, this persistence can also cause major problems. Problems can arise when a minimind has not been constructed properly or when its operation has taken a wrong turn that becomes permanent. For example, making a snap judgment using these miniminds is a big reason people make errors on everyday problems, particularly those involving statistics and logical thinking.
A first step in enhancing your miniminds is to understand what types you have available. The ones that work well can be left alone, while the ones that repeatedly make mistakes need to be retrained. When you survey your mental abilities and needs, you may well discover that you need certain abilities -- miniminds -- that you do not currently have. These gaps need to be identified and filled, and to take their place alongside your high-functioning miniminds. And, of course, you need to train the intelligent memories that orchestrate these particular miniminds, so the right ones can be used in the right situations.
Now you know more of the details about why we can have Intelligent Memory, and why we can consciously exercise this memory and make it stronger.