Michal Zimmermann: Pieces of knowledge from the world of GIS.

Articles tagged with the python tag

WMS Download For Future Offline Use

Using WMS in real time might easily become a pain in the ass due to low connection speed or slow server response. Downloading images beforehand seems to be a reasonable choice, both to avoid any slowdowns and to improve the user experience when working with WMS layers.

OWSLib is a great tool to help you get images from a WMS server. Code and some comments follow.

import math
import os
import random
import time

from owslib.wms import WebMapService

BOTTOM_LEFT = (-679363, -1120688)
TOP_RIGHT   = (-565171, -1042703)
SRS_WIDTH   = math.fabs(-639084 - -638825)    # tile width in units of CRS => 259 m
SRS_HEIGHT  = math.fabs(-1070426 - -1070273)  # tile height in units of CRS => 153 m
PX_WIDTH    = 977
PX_HEIGHT   = 578

FORMAT      = 'image/png'
LAYERS      = ['KN', 'RST_PK']
SIZE        = (PX_WIDTH, PX_HEIGHT)
SRS         = 'EPSG:5514'
STYLES      = ['default', 'default']
TRANSPARENT = True

DIRECTORY = 'tiles/'
SLEEP     = random.randint(0, 20)  # seconds, chosen once at startup

dx = math.fabs(BOTTOM_LEFT[0] - TOP_RIGHT[0])  # area width in units of CRS
dy = math.fabs(BOTTOM_LEFT[1] - TOP_RIGHT[1])  # area height in units of CRS

cols = int(math.ceil(dx / SRS_WIDTH)) + 1
rows = int(math.ceil(dy / SRS_HEIGHT)) + 1

counter = 0

with open('world_file.pngw', 'r') as wld_template:
    tmpl = wld_template.read()

wms = WebMapService('http://services.cuzk.cz/wms/wms.asp', version='1.1.1')

if not os.path.exists(DIRECTORY):
    os.mkdir(DIRECTORY)

for i in range(rows):
    if not os.path.exists(DIRECTORY + str(i)):
        os.mkdir(DIRECTORY + str(i))

    for j in range(cols):
        path = DIRECTORY + str(i) + '/kn_' + str(i) + '_' + str(j) + '.png'

        # skip tiles already downloaded during previous runs
        if os.path.exists(path):
            counter += 1
            continue

        # j moves along the x axis (columns), i along the y axis (rows)
        bbox = (
            j * SRS_WIDTH + BOTTOM_LEFT[0],
            i * SRS_HEIGHT + BOTTOM_LEFT[1],
            (j + 1) * SRS_WIDTH + BOTTOM_LEFT[0],
            (i + 1) * SRS_HEIGHT + BOTTOM_LEFT[1]
        )

        img = wms.getmap(
            layers=LAYERS,
            styles=STYLES,
            srs=SRS,
            bbox=bbox,
            size=SIZE,
            format=FORMAT,
            transparent=TRANSPARENT
        )

        with open(path, 'wb') as png:
            png.write(img.read())

        # append the top left corner coordinates to the world file template
        with open(path + 'w', 'w') as wld_file:
            wld_file.write(tmpl)
            wld_file.write('\n' + str(j * SRS_WIDTH + BOTTOM_LEFT[0]))
            wld_file.write('\n' + str((i + 1) * SRS_HEIGHT + BOTTOM_LEFT[1]))

        counter += 1
        print(counter, 'out of', rows * cols)
        time.sleep(SLEEP)

First, always make sure you are not violating the terms of use defined by the service provider. If you are not, here are the necessary steps:

  1. Define your area of interest with bottom left and top right coordinates.
  2. Calculate the width of a single image both in pixels and in units of the CRS to get a right-sized image. Note that the provider may restrict the maximum image size (2048 × 2048 px is usually the biggest you can get).
  3. Define a template world file for referencing the images. OWSLib doesn’t create world files for saved images, so you have to provide them yourself. I recommend using a template file to generate the real world files (see the sketch right after this list).
  4. Be nice! Don’t overload the service. I use time.sleep() for this.
  5. Profit.
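
For the record, a world file is just six lines of plain text: the pixel size along the x axis, two rotation terms, the negative pixel size along the y axis, and the coordinates anchoring the top left pixel. The script appends the last two lines itself, so a template matching the constants above would hold only the first four; the pixel sizes below are simply SRS_WIDTH / PX_WIDTH and -SRS_HEIGHT / PX_HEIGHT:

0.2651
0.0
0.0
-0.2647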

The trouble with WMS is that you can’t request images at an arbitrary scale (e.g. 1:1 000). It’s fairly easy to get all the values needed to imitate this behavior though.

Using QGIS you can:

  1. Get the bounding box of the area you’re interested in.
  2. Save the current view as an image (together with the world file!) and use it as a template for your own world files.
  3. Derive the image width (in CRS units and pixels) from the saved image, thus getting the same zoom level you were using in QGIS.
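
As a sanity check, here is the zoom level implied by the constants used in the script above; a rough sketch assuming a standard 96 DPI screen:

# ground resolution of the tiles defined above
resolution = 259.0 / 977        # ~ 0.265 m per pixel

# at 96 DPI a pixel is about 0.2646 mm wide,
# so the imitated scale is roughly 1:1 000
scale = resolution / 0.0002646  # ~ 1 002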

The code given is not bulletproof; it will fail on any network error. However, if you restart it after such a crash, it checks for existing files and resumes with the first missing tile, so you don’t have to download all the tiles again.
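
If you’d rather survive occasional network hiccups than restart the script by hand, you can wrap the getmap() call in a simple retry loop. A minimal sketch, the number of retries and the delay being arbitrary:

def getmap_with_retries(wms, retries=3, delay=5, **kwargs):
    """Call wms.getmap(**kwargs), retrying on failure."""
    for attempt in range(retries):
        try:
            return wms.getmap(**kwargs)
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)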

Migrating Geoserver And Checking For Missing Data

I’ve upgraded a handful of Geoserver installations and it has never been flawless. If you’re lucky, you end up with just a few layers missing; if you’re not, you’ll miss a bunch of them, together with layer groups, and some stores and workspaces might get screwed up as well.

But how do you check for missing data before switching to the newer version? Thanks to the REST API implemented in Geoserver, it’s rather easy.

import requests
from bs4 import BeautifulSoup
from requests.auth import HTTPBasicAuth

req = requests.get(
    'http://example.com/geoserver/rest/layers',
    auth=HTTPBasicAuth('username', 'password')
)

html = BeautifulSoup(req.text, 'html.parser')

# write one layer name per line
with open('list.txt', 'w') as f:
    for link in html.find_all('a'):
        f.write(link.get_text())
        f.write('\n')

We needed to migrate ~17,000 layers last week, and yes, we could have just shut the door and spent a couple of nights checking them one after another, if we were the dumbest GIS company ever.

As I wanted to make it a bit easier, I wrote the simple Python script above that just authenticates against Geoserver and downloads the list of layers. I actually had to do that twice, once for the old instance and once for the new one. A simple file comparison followed, and I got a list of missing layers in less than two minutes.
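
The comparison itself takes just a few more lines of Python; a sketch assuming the two runs were saved as old.txt and new.txt:

# layers present in the old instance but missing in the new one
with open('old.txt') as old, open('new.txt') as new:
    missing = set(old.read().splitlines()) - set(new.read().splitlines())

for layer in sorted(missing):
    print(layer)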

If you do the same for workspaces, stores and layer groups, your chances of not losing any data after the switch are pretty high.
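
The script generalizes easily to the other resource types; a sketch of such a loop (note that stores live under individual workspaces, e.g. /rest/workspaces/<ws>/datastores, and would need one more level of iteration):

for resource in ['workspaces', 'layers', 'layergroups']:
    req = requests.get(
        'http://example.com/geoserver/rest/' + resource,
        auth=HTTPBasicAuth('username', 'password')
    )
    html = BeautifulSoup(req.text, 'html.parser')

    with open(resource + '.txt', 'w') as f:
        for link in html.find_all('a'):
            f.write(link.get_text() + '\n')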

I guess it’s reasonable to check your maps by hand as well, but this gives you a picture of the current state of your data real quick.

Going 3D With Space Time Cube

Seeing Anita’s space-time cube back in 2013 was a moment of woooow for me. I’ve been interested in unusual ways of displaying data ever since I started studying GIS, and this one was just great. How the hell did she make it?! I thought back then.

And I asked her; we had a little e-mail conversation and that was it. I got busy and had to postpone my attempts to create that viz until I dove into my diploma thesis. So… here you go.

Recipe

What you need is:

  1. Processing and its Python mode,
  2. the PeasyCam library,
  3. tweets with coordinates and a created_at attribute, stored as JSON,
  4. a basemap image of your area of interest.

How to make it delicious

First things first, you need to add a timestamp property to the tweets you want to show (with the following Python code). The created_at parameter is a datetime string like Sat Jun 22 21:30:42 +0000 2013, taken from each tweet in a loop. As a result, you get the number of seconds since January 1, 1970 (the Unix epoch).

import locale
from time import mktime, strptime

def string_to_timestamp(created_at):
    """Return a Unix timestamp parsed from a created_at string."""
    locale.setlocale(locale.LC_TIME, 'en_US.utf8')
    created_at = created_at.split(' ')
    # replace the month abbreviation (e.g. 'Jun') with its number
    created_at[1] = str(strptime(created_at[1], '%b').tm_mon)
    # keep month, day, time and year; drop the weekday and UTC offset
    timestamp = strptime(' '.join(created_at[i] for i in [1, 2, 3, 5]), '%m %d %H:%M:%S %Y')
    return mktime(timestamp)

As you have probably guessed, the timestamp property is the one we’re going to display on the vertical axis. You definitely want the tweets to be sorted chronologically in your JSON file!
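
If they are not, sorting them is a one-liner before you dump the JSON; a hypothetical snippet assuming tweets is a list of dicts, each carrying the timestamp property added above:

tweets.sort(key=lambda tweet: tweet['timestamp'])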

#!/usr/bin/python
# -*- coding: utf-8 -*-
#avconv -i frame-%04d.png -r 25 -b 65536k  video.mp4

from peasy import PeasyCam
import json

basemap = None
tweets = []
angle = 0

def setup():
    global basemap
    global tweets

    size(1010, 605, P3D)

    data = loadJSONArray('./tweets.json')
    count = data.size()

    last = data.getJSONObject(data.size()-1).getFloat('timestamp')
    first = data.getJSONObject(0).getFloat('timestamp')

    for i in range(0, count):
        lon = data.getJSONObject(i).getJSONObject('coordinates').getJSONArray('coordinates').getFloat(0)
        lat = data.getJSONObject(i).getJSONObject('coordinates').getJSONArray('coordinates').getFloat(1)
        time = data.getJSONObject(i).getFloat('timestamp')

        x = map(lon, -19.68624620368202116, 58.92453879754536672, 0, width)
        y = map(time, first, last, 0, 500)
        z = map(lat, 16.59971950210866964, 63.68835804244784526, 0, height)

        tweets.append({'x': x, 'y': y, 'z': z})

    basemap = loadImage('basemap.png')

    cam = PeasyCam(this,53,100,-25,700)
    cam.setMinimumDistance(1)
    cam.setMaximumDistance(1500)

def draw():
    global basemap
    global tweets
    global angle

    background(0)

    # Uncomment to rotate the cube
    """if angle < 360:
        rotateY(radians(angle))
        angle += 1
    else:
        angle = 360 - angle"""

    # cube outline
    stroke(150, 150, 150)
    strokeWeight(.5)
    noFill()
    box(1010, 500, 605)

    # basemap, rotated into the cube's bottom plane
    translate(-505, 250, -302.5)
    rotateX(HALF_PI)
    image(basemap, 0, 0)

    for i in range(0, len(tweets)):
        # thin line connecting the tweet to its footprint on the basemap
        strokeWeight(.5)
        stroke(255, 255, 255)
        line(tweets[i].get('x'), height - tweets[i].get('z'), tweets[i].get('y'),
             tweets[i].get('x'), height - tweets[i].get('z'), 0)

        # the tweet itself
        strokeWeight(5)
        stroke(255, 0, 0)
        point(tweets[i].get('x'), height - tweets[i].get('z'), tweets[i].get('y'))

        # its footprint on the basemap
        strokeWeight(2)
        stroke(255, 255, 255)
        point(tweets[i].get('x'), height - tweets[i].get('z'), 0)

        # connect consecutive tweets, colour fading from red to blue over time
        lrp = map(i, 0, len(tweets), 0, 1)
        frm = color(255, 0, 0)
        to = color(0, 0, 255)
        if i < len(tweets) - 1:
            strokeWeight(1)
            stroke(lerpColor(frm, to, lrp))
            line(tweets[i].get('x'), height - tweets[i].get('z'), tweets[i].get('y'),
                 tweets[i+1].get('x'), height - tweets[i+1].get('z'), tweets[i+1].get('y'))

    # Uncomment to capture the screens
    """if frameCount > 360:
        noLoop()
    else:
        saveFrame('screens/frame-####.png')"""

You should be most interested in these lines:

x = map(lon, -19.68624620368202116, 58.92453879754536672, 0, width)
y = map(time, first, last, 0, 500)
z = map(lat, 16.59971950210866964, 63.68835804244784526, 0, height)

They define how coordinates inside the cube are computed. As you can see, x is the result of mapping the longitudinal extent of the area to the width of the cube; the same happens to z with latitude, and to y as well, except there we map time, not coordinates.
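
For the record, Processing’s map() is plain linear interpolation, rescaling a value from one range to another; spelled out in Python:

def remap(value, start1, stop1, start2, stop2):
    """What Processing's map(value, start1, stop1, start2, stop2) does."""
    return start2 + (value - start1) * (stop2 - start2) / (stop1 - start1)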

The bounding box used in those computations is the bounding box of the basemap. An interesting thing about Processing and its 3D environment is how it defines the origin of the coordinate system: it sits in the upper left corner, with the y axis pointing down and the z axis pointing towards the viewer, which might be slightly different from what you would expect. That’s what you need to be careful about.

How does it look