TopoJSON

Published on February 14, 2013

Last week I updated Shape Escape to convert shapefiles to GeoJSON and TopoJSON, in the hopes of making it easier for developers to quickly get that web-unfriendly shapefile into a vector format that's actually useful client-side.

First, it's worth noting that TopoJSON has got a bunch of webmap folks excited. And rightfully so. Many people seem excited because of the topology it preserves (although I haven't seen many non-demo sites taking advantage of this yet), and also because it advertises a more compact representation of the data. Clearly, if the topology it makes available will help your visualization, TopoJSON is the way to go. But what's this about a smaller representation?

As noted on the wiki page, 'TopoJSON can also be more efficient to render since shared control points need only be projected once. To further reduce file size, TopoJSON uses fixed-precision delta-encoding for integer coordinates rather than floats. This eliminates the need to round the precision of coordinate values (e.g., LilJSON), without sacrificing accuracy'. Sounds good! But what it doesn't mention explicitly is that in order to be efficient about the topology (shared vertices) and the deltas, the coordinate pairs undergo quantization.

No idea what that last paragraph was all about? Does "delta encoding" just remind you of an airline? Quantization not on your word-a-day calendar?

Regarding the delta encoding of coordinate pairs: it's a great way to save bits by storing each vertex as a relative offset from the previous one -- encoded polylines, for example, use the technique to great effect (though per geometry, with no notion of topology). Version control for your software works similarly (think diffs). Anyway, that accounts for some great space savings; excellent.
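
To make the delta idea concrete, here's a minimal Python sketch (my own illustration, not TopoJSON's actual implementation): every vertex after the first is stored as an offset from its predecessor, so an arc of nearby points serializes as lots of small integers instead of big repetitive floats.

def delta_encode(coords):
    """Store each vertex as an offset from the previous one.

    Successive vertices in a geometry tend to be close together,
    so the deltas are small numbers that serialize compactly.
    """
    encoded = []
    prev_x, prev_y = 0, 0
    for x, y in coords:
        encoded.append((x - prev_x, y - prev_y))
        prev_x, prev_y = x, y
    return encoded

def delta_decode(deltas):
    """Reverse the encoding by accumulating the offsets."""
    coords, x, y = [], 0, 0
    for dx, dy in deltas:
        x += dx
        y += dy
        coords.append((x, y))
    return coords

# A short arc: after the first point, the deltas are tiny
# compared to the raw (already-quantized) coordinates.
arc = [(12345, 67890), (12346, 67892), (12344, 67893)]
print(delta_encode(arc))  # [(12345, 67890), (1, 2), (-2, 1)]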

So what about the quantization part? One way to think of it is that the geometries are simplified in a "snap to grid" fashion, which means the size of the grid you're snapping to gives you a tradeoff between compactness and accuracy. The coarser your grid, the more vertices may get snapped to a single location (and the further they may be moved from their original locations). Since rounding your original coordinates (e.g. lopping significant digits off your lat/lngs) in essence does the same thing, the quantization step of the conversion does cause some loss of accuracy. So how big is the loss? Even if it's not discernible for a non-zoomable map, what does it mean for the traditional slippy-map developer?
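
Here's roughly what that snapping looks like, again as an illustrative sketch rather than topojson's real code; the q parameter plays the same role as the -q option mentioned below:

def quantize(coords, bbox, q=10000):
    """Snap lon/lat pairs onto a q-by-q integer grid over the bounding box.

    Coarser grids (smaller q) mean smaller integers, and more vertices
    collapsing onto the same grid point -- i.e. lost detail.
    """
    x0, y0, x1, y1 = bbox
    kx = (q - 1) / (x1 - x0) if x1 > x0 else 1
    ky = (q - 1) / (y1 - y0) if y1 > y0 else 1
    return [(round((x - x0) * kx), round((y - y0) * ky)) for x, y in coords]

# A California-ish bounding box: with q=10000 each grid cell is roughly
# 0.001 degrees (~100 m) on a side, so these two nearby vertices merge.
bbox = (-124.4, 32.5, -114.1, 42.0)
print(quantize([(-122.4194, 37.7749), (-122.4195, 37.7750)], bbox))
# both snap to the same grid point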

To help illustrate, ShpEscape outputs TopoJSON at a few different grid sizes (or, as the documentation describes the -q option, 'max. differentiable points in one dimension'), so you can select from a variety of output options (figure 1) for each upload.

Ok great, but which option should you choose for your mapping needs?

A quick experiment: I uploaded some Natural Earth country borders and the US Census California Counties to ShpEscape. There's a big difference in these images, and my conclusion is that if you want to use TopoJSON in a slippy map, you should consider how far your users may zoom in and how important it is not to lose detail, and you should probably avoid the default 10k quantize parameter.

Additionally, if you do decide to stick with GeoJSON, don't feel too bad: the 90% savings that first jumps out at you might not end up as big as you think. Below, for example, are the numbers (in kB) for the CA Counties:


Format                     Size (kB)
GeoJSON                        6,322
TopoJSON [default]               454
TopoJSON [100M]                1,539
GeoJSON [gzip]                 1,418
TopoJSON [100M gzip]             556


TopoJSON is still the clear winner in this experiment, at a bit over a third the size when sent over the wire. But there's still some cost (the additional topojson.js library, for example), and I also didn't experiment with liljson, which could potentially save some space on the GeoJSON side.
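
If you want to reproduce this kind of comparison on your own data, a few lines of Python will do it (the filenames here are placeholders for whatever you export):

import gzip
import os

def wire_sizes(path):
    """Return (raw_kb, gzipped_kb) for a file, approximating over-the-wire cost."""
    with open(path, "rb") as f:
        raw = f.read()
    return len(raw) // 1024, len(gzip.compress(raw)) // 1024

# Placeholder filenames; point these at your own exports.
for name in ["counties.geo.json", "counties.topo.json"]:
    if os.path.exists(name):
        print(name, *wire_sizes(name))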

Finally, don't take the above figures too seriously -- YMMV with different datasets; this one, for example, has polygons with shared borders and a relatively even vertex distribution. Instead, be thankful we have an awesome new option for sending vectors around, and use it with care.

TileMill

Published on June 12, 2011

I've played briefly with TileMill before, but after learning more about the advances Development Seed is putting into their MapBox stack (such as Node.js integration with Mapnik, UTFGrid, and more), I realized it was time to sit down and play with it for real.

My main interest right now is getting a feel for the CSS-like syntax used in Carto, but since I needed to set up a full instance of TileMill to play with anyway, I figured I might as well make it into an Amazon EC2 AMI so anyone can easily boot up an instance and get started.

Setup was extremely simple (thanks to Dane telling me how to cut and paste their very straightforward install instructions; not that he noticed I was installing everything to /tmp). After some much-needed sleep and attempt #2 here on the plane, you can now go to the Amazon AWS console and load up ami-56ae563f (or search for tilemill), and you're done. Just wait for the instance to start up, and TileMill should be running on port 80. If you want to get to the TileMill console, ssh in with the keypair you associated with the instance and type:

# become root, since TileMill runs in root's screen session
sudo su
# reattach the detached screen session where TileMill is running
screen -r tilemill
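
If you'd rather script the launch than click through the console, something like this boto3 sketch should work (boto3 itself postdates this post, and the keypair name and instance type are placeholders):

import boto3  # assumes your AWS credentials are already configured

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-56ae563f",    # the TileMill AMI from this post
    InstanceType="m1.small",   # placeholder; any modest size will do
    KeyName="my-keypair",      # placeholder; use your own keypair
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)  # once it's up, browse to the instance on port 80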

Shape Escape

Published on December 20, 2010

Well it's been a while, so just a quick note: Since the last post, I started working full time for Google. And with that out of the way, here's a post on how and why I made shpescape.com, which lets you upload shapefiles to Google Fusion Tables.

Why shpescape?

Google Fusion Tables makes it easy to import and visualize data from spreadsheets and KML, and while it has increasingly robust spatial support, it does not currently let you upload shapefiles directly. Since shapefiles are still incredibly common in the wild, I thought I'd make a quick tool to let people upload them to Fusion Tables.

Which platform?

I thought I'd try Google App Engine to avoid any hosting costs (given this will likely not be an extremely popular website), but while there's a decent shapefile reader or two for Python, there's not a lot of support for things like reprojection and other geometry manipulation without additional C++ libraries that App Engine won't run. So I just went for a simple GeoDjango app.
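
For a taste of the kind of work that needs those C++ libraries, here's a small GeoDjango/GDAL sketch (the shapefile name and field name are hypothetical) that reprojects features to WGS84, which is essentially what has to happen before generating KML:

from django.contrib.gis.gdal import CoordTransform, DataSource, SpatialReference

ds = DataSource("counties.shp")        # hypothetical input shapefile
layer = ds[0]
wgs84 = SpatialReference(4326)         # lat/lng, as Fusion Tables expects
ct = CoordTransform(layer.srs, wgs84)  # from the projection in the .prj

for feature in layer:
    geom = feature.geom                # an OGRGeometry
    geom.transform(ct)                 # reproject in place via GDAL
    print(feature.get("NAME"), geom.wkt[:60])  # NAME is a hypothetical field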

Authentication

I used my colleague Kathryn's Fusion Tables Python client to handle the authentication (OAuth), and I decided against adding OpenID to the mix for associating an account with various uploads. The downside is that you can't log in and view your previous uploads. But you can always go to the main Fusion Tables page to see all your tables, and the upside was one less thing for me to consider (for example, if you're logged in with multiple accounts in the same browser, OAuth doesn't tell you which account granted the permissions). [Edit: It turns out you can actually request the email address of an authorized user using the scope noted at http://sites.google.com/site/oauthgoog/Home/emaildisplayscope]

Handling a Shapefile Upload

I used a simple fork of Dane Springmeyer's django-shapes app to handle the shapefile import. The customizations let users upload a zipfile that has a shapefile in a subfolder, and/or multiple shapefiles in a single zip. I had never really noticed shapefiles being zipped up this way, and it really surprised me how common these scenarios are with shapefiles from various US counties and other agencies -- my first 3 test users all had their uploads fail until I added this. After the upload is verified as valid, it creates a shapeUpload object, which is processed separately so the end user can watch a self-reloading status page.
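
The fix itself is simple; the gist of it (my paraphrase, not the actual django-shapes fork) is just to search the whole archive rather than assuming a single .shp at the top level:

import zipfile

def find_shapefiles(zip_path):
    """Return every .shp entry in a zip, wherever it's nested.

    Uploads from counties and other agencies often bury the shapefile
    in a subfolder, or pack several shapefiles into one archive.
    """
    with zipfile.ZipFile(zip_path) as z:
        return [name for name in z.namelist()
                if name.lower().endswith(".shp")]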

Processing the Upload

My initial attempt was pretty straightforward:
  • Attempt to get the projection of the shapefile (from the .prj)
  • For each feature, get its KML and attributes
  • Upload 'em to Fusion Tables, a few hundred rows or <1MB at a time (the API can handle at most 500 rows and 1MB per POST; see the batching sketch below)
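
Here's a minimal sketch of that batching logic (illustrative only; the real code builds SQL-style INSERT statements for the Fusion Tables API, and each yielded batch becomes one POST):

MAX_ROWS = 500
MAX_BYTES = 1000000  # the API caps each POST at roughly 1MB

def batches(inserts):
    """Group INSERT statements so no single POST exceeds the API limits."""
    batch, size = [], 0
    for stmt in inserts:  # each stmt is one row's INSERT string
        # flush the current batch if adding this row would bust a limit
        if batch and (len(batch) == MAX_ROWS or size + len(stmt) > MAX_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(stmt)
        size += len(stmt)
    if batch:
        yield batch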

Additional Features

Next up, I started adding a few extra bits, which led to an import method begging for a refactor.
  • Simplification for any KML over 1M characters long (the maximum Fusion Tables allows per cell)
  • Process/upload 10k rows at a time (so we don't use too much memory on very large shapefiles)
  • Added numeric styling columns for string fields that don't have too many unique values (Fusion Tables only allows robust styling like gradients and buckets on numeric fields; see the sketch after this list)
  • Allow users to specify some additional geometry columns:
    • Simplified Geometry
    • Centroid (only works for polygon shapefiles)
    • Buffered Centroid (so you can apply the more robust polygon styling rules on the 'centroid')
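
The styling-column trick is just a value-to-integer mapping; here's a sketch of the idea (the cutoff of 50 is my guess at "not too many unique values"):

MAX_UNIQUE = 50  # hypothetical cutoff for "not too many unique values"

def numeric_codes(values):
    """Map a string column's distinct values to small integers.

    Fusion Tables can only bucket/gradient-style numeric columns, so a
    parallel numeric column makes string fields styleable that way.
    """
    distinct = sorted(set(values))
    if len(distinct) > MAX_UNIQUE:
        return None  # too many categories to be useful for styling
    lookup = {value: i for i, value in enumerate(distinct)}
    return [lookup[v] for v in values]

# e.g. ['rural', 'urban', 'rural', 'suburban'] -> [0, 2, 0, 1]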

Finishing up

This whole project was a pretty quick attempt at what I hope is a useful solution to a common problem, so any comments on how to make it better are appreciated. And if you want to see how it all works in more detail, I also open sourced the code. Enjoy!

Oceans Showcase

Published on February 05, 2010

Last night at the San Francisco Ocean Film Festival Google launched the Oceans Showcase, which is the second contract I've had the opportunity to work on with them. The showcase is a set of Google Earth based Tours for playing in a webpage (plugin required) or via download.

The Ocean Film Festival is going on until Sunday, and has a really interesting lineup - check it out if you're in the Bay Area. Either way, take a peek at some of the Tours: There's some really amazing content available for the Oceans layer in Google Earth that I was totally unaware of before looking more closely.
