Introduction to GIS with R

Spatial data with the sp and sf packages

The geographic visualization of data makes up one of the major branches of the Digital Humanities toolkit. There is a plethora of tools that can visualize geographic information, from full-scale GIS applications such as ArcGIS and QGIS, to web-based tools like Google Maps, to any number of programming languages. There are advantages and disadvantages to these different types of tools. Using a command-line interface has a steep learning curve, but it has the benefit of enabling approaches to analysis and visualization that are customizable, transparent, and reproducible.1 My own interest in coding and R began with my desire to dip my toes into geographic information systems (GIS) and create maps of an early modern correspondence network. The goal of this post is to introduce the basic landscape of working with spatial data in R from the perspective of a non-specialist. Since the early 2000s, an active community of R developers has built a wide variety of packages to enable R to interface with geographic data. The extent of the geographic capabilities of R is readily apparent from the many packages listed in the CRAN task view for spatial data.2

In my previous post on geocoding with R I showed the use of the ggmap package to geocode data and create maps using the ggplot2 system. This post will build on the location data obtained there to introduce the two main R packages that have standardized the use of spatial data in R. The sp and sf packages use different methodologies for integrating spatial data into R. The sp package introduced a coherent set of classes and methods for handling spatial data in 2005.3 The package remains the backbone of many packages that provide GIS capabilities in R. The sf package implements the simple features open standard for the representation of geographic vector data in R. The package first appeared on CRAN at the end of 2016 and is under very active development. The sf package is meant to supersede sp, implementing ways to store spatial data in R that integrate with the tidyverse workflow of the packages developed by Hadley Wickham and others.
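
To make the contrast concrete, the sketch below builds a spatial object with each package from a small, made-up data frame of geocoded places. The place names and coordinates are purely illustrative, and both packages can do far more than this.

```r
library(sf)
library(sp)

# hypothetical data frame of geocoded places (WGS84 longitude/latitude)
letters_df <- data.frame(place = c("Haarlem", "Antwerp"),
                         lon = c(4.64, 4.40),
                         lat = c(52.38, 51.22))

# sf: a simple features object is a data frame with a geometry column
letters_sf <- st_as_sf(letters_df, coords = c("lon", "lat"), crs = 4326)

# sp: the older class system stores the coordinates in slots of an S4 object
letters_sp <- letters_df
coordinates(letters_sp) <- ~lon + lat
proj4string(letters_sp) <- CRS("+proj=longlat +datum=WGS84")
```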

There are a number of good resources on working with spatial data in R. The best sources for information about the sp and sf packages that I have found are Roger Bivand, Edzer Pebesma, and Virgilio Gómez-Rubio, Applied Spatial Data Analysis with R (2013) and the in-progress book Geocomputation with R by Robin Lovelace, Jakub Nowosad, and Jannes Muenchow, which concentrate on sp and sf respectively. The vignettes for sf are also very helpful. The perspective that I adopt in this post is slightly different from these resources. In addition to more explicitly comparing sp and sf, this post approaches the two packages from the starting point of working with geocoded data with longitude and latitude values that must be transformed into spatial data. It takes the point of view of someone getting into GIS and does not assume that you are working with data that is already in a spatial format. In other words, this post provides the information that I wish I had known as I learned to work with spatial data in R. Therefore, I begin the post with a general overview of spatial data and how sp and sf implement the representation of spatial data in R. The second half of the post uses an example of mapping the locations of letters sent to a Dutch merchant in 1585 to show how to create, work with, and plot sp and sf objects. I highlight the differences between the two packages and ultimately discuss some reasons why the R spatial community is moving towards the use of the sf package.
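
Both classes can also be plotted with very little code. A minimal sketch, assuming the letters_sf object created in the snippet above and a recent version of ggplot2 that includes geom_sf():

```r
library(ggplot2)

# base R: plot only the point geometries of the sf object
plot(st_geometry(letters_sf))

# ggplot2 (>= 3.0.0) understands sf objects directly
ggplot(letters_sf) +
  geom_sf() +
  labs(title = "Locations of letters sent to Daniel van der Meulen, 1585")
```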

[Read More]

Introduction to Network Analysis with R

Creating static and interactive network graphs

Over a wide range of fields, network analysis has become an increasingly popular tool for scholars to deal with the complexity of the interrelationships between actors of all sorts. The promise of network analysis is the placement of significance on the relationships between actors, rather than seeing actors as isolated entities. The emphasis on complexity, along with the creation of a variety of algorithms to measure various aspects of networks, makes network analysis a central tool for digital humanities.1 This post will provide an introduction to working with networks in R, using the example of the network of cities in the correspondence of Daniel van der Meulen in 1585.

There are a number of applications designed for network analysis and the creation of network graphs, such as Gephi and Cytoscape. Though not specifically designed for it, R has developed into a powerful tool for network analysis. The strength of R in comparison to stand-alone network analysis software is threefold. In the first place, R enables reproducible research that is not possible with GUI applications. Secondly, the data analysis power of R provides robust tools for manipulating data to prepare it for network analysis. Finally, there is an ever-growing range of packages designed to make R a complete network analysis tool. Significant network analysis packages for R include the statnet suite of packages and igraph. In addition, Thomas Lin Pedersen has recently released the tidygraph and ggraph packages that leverage the power of igraph in a manner consistent with the tidyverse workflow. R can also be used to make interactive network graphs with the htmlwidgets framework that translates R code to JavaScript.

This post begins with a short introduction to the basic vocabulary of network analysis, followed by a discussion of the process for getting data into the proper structure for network analysis. The network analysis packages have all implemented their own object classes. In this post, I will show how to create the specific object classes for the statnet suite of packages with the network package, as well as for igraph and tidygraph, which is based on the igraph implementation. Finally, I will turn to the creation of interactive graphs with the visNetwork and networkD3 packages.
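
As a rough preview of what those object classes look like, the sketch below turns a small, made-up edge list of letters sent between cities into network, igraph, and tidygraph objects; the data are placeholders rather than the actual 1585 correspondence.

```r
library(igraph)
library(tidygraph)

# made-up edge list: letters sent between pairs of cities
edges <- data.frame(from   = c("Haarlem", "Antwerp", "Delft"),
                    to     = c("Delft", "Haarlem", "Antwerp"),
                    weight = c(5, 3, 2))

# statnet's network class (namespaced here because network and igraph
# mask some of each other's functions when both are attached)
letters_network <- network::network(edges, matrix.type = "edgelist")

# igraph object built from the same edge list
letters_igraph <- graph_from_data_frame(edges, directed = TRUE)

# tidygraph wraps the igraph object in a tidyverse-friendly interface
letters_tidy <- as_tbl_graph(letters_igraph)
```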

[Read More]

Geocoding with R

Using ggmap to geocode and map historical data

In the previous post I discussed some reasons to use R instead of Excel to analyze and visualize data and provided a brief introduction to the R programming language. That post used an example of letters sent to the sixteenth-century merchant Daniel van der Meulen in 1585. One aspect missing from the analysis was a geographical visualization of the data. This post will provide an introduction to geocoding and mapping location data using the ggmap package for R, which enables the creation of maps with ggplot. There are a number of websites that can help geocode location data and even create maps.1 You could also use a full-scale geographic information system (GIS) application such as QGIS or ArcGIS. However, an active developer community has made it possible to complete a full range of geographic analysis, from geocoding data to the creation of publication-ready maps, with R.2 Geocoding and mapping data with R instead of a web or GIS application brings the general advantages of using a programming language in analyzing and visualizing data. With R, you can write the code once and use it over and over, while also providing a record of all your steps in the creation of a map.3

This post will merely scratch the surface of the mapping capabilities of R and will not enter into the domain of the more specialized geographic packages available for R.4 Instead, it will build on the dplyr and ggplot skills discussed in my brief introduction to R. The example of geocoding and mapping with R will also provide another opportunity to show the advantages of coding. In particular, geocoding is a good example of how code can simplify the workflow for entering data. Instead of dealing with separate spreadsheets to store information about the letters and geographic information, coding makes it possible to create the geographic information directly from the letters data. The code to find the longitude and latitude of locations can be saved as an R script and rerun if new data is added to ensure that the information is always kept up to date.
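
A minimal sketch of that workflow, assuming a hypothetical letters_data data frame with a place column recording where each letter was sent from, and a geocoding service set up for ggmap (recent versions of the package require a registered Google API key):

```r
library(dplyr)
library(ggmap)

# recent versions of ggmap need a key registered first, e.g.
# register_google(key = "...")

# keep one row per unique place mentioned in the letters data
locations <- letters_data %>%
  distinct(place)

# mutate_geocode() queries the geocoding service and adds lon and lat columns
locations <- locations %>%
  mutate_geocode(place)
```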

[Read More]

Excel vs R: A Brief Introduction to R

With examples using dplyr and ggplot

Quantitative research often begins with the humble process of counting. Historical documents are never as plentiful as a historian would wish, but counting words, material objects, court cases, etc. can lead to a better understanding of the sources and the subject under study. When beginning the process of counting, the first instinct is to open a spreadsheet. The end result might be the production of tables and charts created in the very same spreadsheet document. In this post, I want to show why this spreadsheet-centric workflow is problematic and recommend the use of a programming language such as R as an alternative for both analyzing and visualizing data. There is no doubt that the learning curve for R is much steeper than producing one or two charts in a spreadsheet. However, there are real long-term advantages to learning a dedicated data analysis tool like R. Such advice to learn a programming language can seem both daunting and vague, especially if you do not really understand what it means to code. For this reason, after discussing why it is preferable to analyze data with R instead of a spreadsheet program, this post provides a brief introduction to R, as well as an example of analysis and visualization of historical data with R.1

The draw of the spreadsheet is strong. As I first thought about ways to keep track of and analyze the thousands of letters in the Daniel van der Meulen Archive, I automatically opened up Numbers — the spreadsheet software I use most often — and started to think about what columns I would need to create to document information about the letters. Whether one uses Excel, Numbers, Google Sheets, or any other spreadsheet program, the basic structure and capabilities are well known. They all provide more-or-less aesthetically pleasing ways to easily enter data, view subsets of the data, and rearrange the rows based on the values of the various columns. But, of course, spreadsheet programs are more powerful than this, because you can add your own programmatic logic into cells to combine them in seemingly endless ways and produce graphs and charts from the results. The spreadsheet, after all, was the first killer app.

With great power, there must also come great responsibility. Or, in the case of the spreadsheet, with great power there must also come great danger. The danger of the spreadsheet derives from its very structure. The mixture of data entry, analysis, and visualization makes it easy to confuse cells that contain raw data with those that are the product of analysis. The nature of defining programmatic logic — such as which cells are to be added together — by mouse clicks means that a mistaken click or drag action can lead to errors or the overwriting of data. You only need to think about the dread of the moment when you go to close a spreadsheet and the program asks whether you would like to save changes. It makes you wonder. Do I want to save? What changes did I make? Because the logic in a spreadsheet is all done through mouse clicks, there is no way to effectively track what changes have been made either in one session or in the production of a chart. Excel mistakes can have wide-ranging consequences, as the controversy around the paper of Carmen Reinhart and Kenneth Rogoff on national debt made clear.2
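
By contrast, an R script keeps the raw data, the analysis, and the chart as separate, repeatable steps. A minimal sketch with dplyr and ggplot2, using a hypothetical letters data frame rather than the actual Daniel van der Meulen data:

```r
library(dplyr)
library(ggplot2)

# hypothetical data: one row per letter, with a sender and a date
letters_data <- data.frame(
  sender = c("Andries", "Jacques", "Andries", "Marten"),
  date   = as.Date(c("1585-01-03", "1585-02-11", "1585-02-20", "1585-03-05"))
)

# the raw data is never modified; the analysis lives in its own object
per_sender <- letters_data %>%
  count(sender, sort = TRUE)

# the visualization is built from the analyzed data, not typed into cells
ggplot(per_sender, aes(x = sender, y = n)) +
  geom_col()
```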

[Read More]

My Approach to Digital Humanities

Digital humanities holds the promise of increasing the means by which scholars are able to analyze and present data. Though some sentiments about the significance of digital humanities might be overblown, there is no doubt that the more ways we have to analyze sources the better. Learning a variety of the tools that make up the rather nebulous universe of digital humanities is like learning a new language. It opens up new possibilities that were previously closed or necessitated the expertise of others. This frames digital humanities as a collection of skills rather than a means to a predetermined end. I have adopted this perspective in learning about the possibilities opened by digital humanities and working on a digital humanities project. I am hardly the first person to take these steps, but I hope that by explaining my thought process I can set a basis for future posts on digital humanities.

If there has been one guiding force in my approach to digital humanities, it is to learn skills and tools in the process of production. In a way, this is simply the application of critical thinking to how I create my scholarly work. Instead of doing things in what seems the de facto manner, I have sought to question if there is either a more efficient way or a way in which I could gain or improve my competency. The goal of efficiency was particularly significant in thinking about how to organize my research and writing (DH 1.0), while learning new skills has been more important in the production of digital humanities projects (DH 2.0). This may not be the easiest way to complete a project in the short run, but by doing things the hard way, I am looking to open up new opportunities for future projects. With this theoretical approach in mind, let me now discuss a few concrete principles of my digital humanities practices concerning text, applications, and producing digital humanities projects.

[Read More]

New kinds of Projects: DH 2.0 and Coding

In the process of learning about how I could use digital technologies to better organize my research, I quickly started to think about how I might extend these skills to produce new kinds of outputs.1 I was familiar with the concept of digital humanities, but the step from an internal process of organizing research and writing to production seemed both too nebulous and difficult. Digital humanities also seemed to concentrate on the visual. This was intriguing, but did not present itself as the most pressing need for a graduate student who was writing a dissertation on sibling relations among 16th-century merchants. It took years for me to move from what I am calling DH 1.0 to DH 2.0, to move from research methodology to making what could properly be termed digital humanities projects. This post presents an introduction to how I eventually took this step and why I decided to learn to code instead of other available solutions.2

[Read More]

Thinking about Workflow: DH 1.0

In the spring of 2011, I was in the middle of doing research for my dissertation. I had recently returned from my second extended trip to the archives in the Netherlands and Belgium and had accumulated a ton of notes. I knew that technology had drastically altered the possibilities for research, but the fundamentals of my own workflow were hardly different than they had been when I began undergrad in the early 2000s. Sure, I used the internet to watch Netflix, but the basic tools—centered on the Microsoft Office Suite—were essentially the same. One day, I gave in to the nagging feeling that I was not getting enough out of my computer, that there were better ways of doing things. I started to poke around the internet to see if I could find ways to improve my workflow. What started as a distraction soon turned into a months-long project on finding better tools for conducting research. I quickly realized that the actual process of getting work done, doing research and writing papers, is little discussed in graduate school. Early career graduate students are so busy reading, writing, and generally trying to keep up, that the process of how to do the work is easily pushed to the background. Since then, I have tried to think critically and systematically about my workflow and have often discussed this with others.

[Read More]

By Way of Introduction

In this introductory post I want to lay out the reasons for the creation of this website, to discuss some content that I will be looking to create, and to set some goals for the site. First, though, I should introduce myself. I am a historian of early modern Europe. My research investigates merchant families and the social basis of trade, politics, and religion. Specifically, I have worked in the archives of the Van der Meulens and Della Failles, two merchant families from Antwerp involved in European-wide trade at the end of the sixteenth century. I am currently working on a manuscript that argues for the significance of sibling relationships and inheritance in the development of early modern capitalism. In addition, I am working on some digital humanities projects using the two archives.

[Read More]