The GeoPython conference is focused mainly on the following topics; however, other Python-centric talks are welcome too.
In this presentation I am going to introduce how I used Python to compute the footprints of images taken by drones. The vector-based footprints make it easy to see in a GIS the coverage of the images taken, and can be used for further analysis. For instance, it might be helpful to see whether the chosen ground control points (GCPs) are equally distributed over the area of interest, whether the pictures taken actually use these GCPs, or whether there is an over- or under-coverage somewhere. The presentation also shows how Python is used to take advantage of standard geo-processing modules like GDAL/OGR as well as standard modules like time or numpy. The development of the tool has been done in a Jupyter Notebook. So in this presentation a lot of practical issues come together: identifying a (geospatial) problem, conceptualising a possible solution, defining a good environment in which to proceed (Jupyter Notebook), deciding on the development language (Python), looking for and finding the appropriate modules to avoid re-inventing the wheel (GDAL/OGR etc.) - and finally writing the code. A brief outlook on what can be done with the resulting data (vector polygons) in terms of GIS analysis concludes the presentation. In summary, this presentation is a report on a very practical question with a very practical solution. So if you expect an academic paper or a very fancy approach, the talk is not worth attending. But if you want to hear how to solve an everyday problem with Python, you might want to spend those 20 minutes listening.
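As a rough illustration of the geometry involved, here is a minimal, purely hypothetical sketch computing a footprint for a nadir-pointing camera over flat terrain; the real tool described in the talk must also handle camera tilt and the terrain via GDAL/OGR. All names and numbers below are illustrative, not from the presentation.

```python
import math

def nadir_footprint(cam_x, cam_y, altitude_m, sensor_w_mm, sensor_h_mm,
                    focal_mm, yaw_deg=0.0):
    """Corner coordinates of a nadir image footprint on flat terrain.

    Ground coverage follows from similar triangles:
    ground_width = sensor_width * altitude / focal_length.
    """
    gw = sensor_w_mm * altitude_m / focal_mm   # ground width (m)
    gh = sensor_h_mm * altitude_m / focal_mm   # ground height (m)
    yaw = math.radians(yaw_deg)
    corners = []
    for dx, dy in ((-gw/2, -gh/2), (gw/2, -gh/2), (gw/2, gh/2), (-gw/2, gh/2)):
        # rotate the corner offset by the flight yaw, then translate
        rx = dx * math.cos(yaw) - dy * math.sin(yaw)
        ry = dx * math.sin(yaw) + dy * math.cos(yaw)
        corners.append((cam_x + rx, cam_y + ry))
    return corners

# 24x36 mm full-frame sensor, 35 mm lens, flying 100 m above ground
print(nadir_footprint(0, 0, 100, 36, 24, 35))
```

The resulting corner list can be written out as a vector polygon (e.g. via OGR) for inspection in a GIS.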
Performing GIS analysis via scripts is the ultimate way to "show your working" as each step is documented in code. Even better, the analysis can be repeated with a single command and can be updated as new data become available. The PyGRASS interface provides a simple way to run spatial analysis in GRASS and to integrate it with pre- and post-processing steps within the same script.
We show how Python and GRASS were used to calculate the mass of material erupted by two large (>10 cubic kilometres volume), pre-historic eruptions of Iceland's Hekla volcano. Firstly, Python is used to read field and lab data from CSV files into an SQLite database. Pyproj re-projects GPS locations to the local coordinate reference system. Within GRASS, Voronoi tessellation is used to divide the area where the deposits are found into regions that each contain one sample data point. Maps can be exported to visualise changes in thickness. After calculating the areas of the Voronoi cells, the Python-based Pandas library can connect to the same database and multiply the deposit thickness and density by the calculated area to get the mass in each cell. These are summed to give the total erupted mass.
Unlike traditional methods, this is an entirely objective and data-driven way to calculate the erupted mass. The advantages and disadvantages of this, and the best ways to incorporate manual steps where required, will be discussed.
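The mass summation described above reduces to a simple aggregation. A toy stand-in for the Pandas step, with entirely made-up numbers, might look like:

```python
# Illustrative stand-in for the pandas step: each record is one Voronoi
# cell with the measured deposit thickness and density at its sample point.
samples = [
    # area (m2), thickness (m), density (kg/m3) -- made-up values
    {"area": 2.0e6, "thickness": 1.5, "density": 1400},
    {"area": 3.5e6, "thickness": 0.8, "density": 1300},
    {"area": 1.2e6, "thickness": 0.3, "density": 1250},
]

def erupted_mass(cells):
    """Sum thickness * density * area over all Voronoi cells (kg)."""
    return sum(c["area"] * c["thickness"] * c["density"] for c in cells)

print(f"Total erupted mass: {erupted_mass(samples):.3e} kg")
```

In the real workflow the same multiplication and sum run over a Pandas DataFrame joined from the SQLite database and the GRASS cell areas.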
This tutorial is an introduction to geospatial data analysis in Python, with a focus on tabular vector data using GeoPandas. The content focuses on introducing participants to the different libraries for working with geospatial data, and will cover munging geo-data and exploring relations over space. This includes importing data in different formats (e.g. shapefile, GeoJSON), visualizing, combining and tidying them up for analysis, and will use libraries such as pandas, geopandas, shapely, pyproj, matplotlib, cartopy, ...
The tutorial will cover the following topics, each of them using Jupyter notebooks and hands-on exercises with real-world data:
The goal of GeoPandas is to make working with geospatial vector data in Python easier. GeoPandas (https://github.com/geopandas/geopandas) extends the pandas data analysis library to work with geographic objects and spatial operations.
Pandas is a package for handling and analysing tabular data, and one of the drivers of the popularity of Python for data science. GeoPandas combines the capabilities of pandas and shapely (the Python interface to the GEOS library), providing geospatial operations in pandas and a high-level, performant interface to shapely for working with many geometries at once. It combines the power of a whole ecosystem of geo tools by building upon the capabilities of many other libraries, including fiona (reading/writing data with GDAL), pyproj (projections), rtree (spatial index), ... Further, by working together with Dask, it can also be used to perform geospatial analyses in parallel on multiple cores or distributed across a cluster. GeoPandas enables you to easily do operations in Python that would otherwise require a spatial database such as PostGIS.
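To give a flavour of what such libraries do under the hood, here is a pure-Python version of one classic geometric predicate, point-in-polygon by ray casting; with shapely this is a single `contains` call, and GeoPandas applies it across a whole column of geometries at once.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test -- the kind of geometric
    predicate shapely evaluates per geometry and GeoPandas vectorises."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # point inside the square
```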
The standard US keyboard is ubiquitous and is reflected in most national keyboard layouts. It is suitable to type Python and other source code, but doesn't offer all the diacritics needed in German, French, Czech, Hungarian and many other European languages.
I'll show you how I am happily typing in several languages on a single standard US keyboard layout. And I'll show you how Python handles these Unicode characters.
Recurrent Neural Networks (RNNs) have become famous over time due to their property of retaining internal memory. These neural nets are widely used in recognizing patterns in sequences of data, like numerical time series data, images, handwritten text, spoken words, genome sequences, and much more. Since these nets possess memory, there is a certain analogy that we can make to the human brain in order to learn how RNNs work. RNNs can be thought of as a network of neurons with feedback connections, unlike the feedforward connections which exist in other types of Artificial Neural Networks.
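A single-unit recurrent step can be sketched in a few lines of plain Python (the weights below are chosen arbitrarily) to show how the feedback connection carries memory across a sequence:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step for a single-unit RNN:
    h_t = tanh(w_x * x_t + w_h * h_prev + b).
    The feedback through h_prev is the network's 'memory'."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_sequence(xs, w_x=0.5, w_h=0.8, b=0.0):
    """Feed a sequence through the unit, carrying the hidden state."""
    h = 0.0
    for x in xs:
        h = rnn_step(x, h, w_x, w_h, b)
    return h

# An input at the first step still echoes in the state two steps later
print(run_sequence([1.0, 0.0, 0.0]))
```

Training such a unit (backpropagation through time) is what the talk builds up to; this sketch only shows the forward pass.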
The flow of the talk will be as follows:
If you’ve spent much time writing (or debugging) Python performance problems, you’ve probably had a hard time managing memory with its limited language support.
In this talk, we venture deep into the belly of the Rust Language to uncover the secret incantations for building high performance and memory safe Python extensions using Rust.
Rust has a lot to offer in terms of safety and performance for high-level programming languages such as Python, Ruby, JavaScript and more; its easy Foreign Function Interface capabilities enable developers to easily develop bindings for foreign code.
Project goal: To find a specific rock climbing area in Kenya. This area is mentioned in old descriptions but is not currently known. In order to find it, satellite pictures of an area of roughly 400 by 400 km have to be scanned and evaluated. The data source: Google Maps and hand-labelling. How: by collecting and labelling satellite pictures of similar climbing areas and non-climbing areas, and training a variety of network architectures to classify those pictures. The classifier is then used to go through the area and mark/store all possible locations. Why Python? The Flask micro framework provides a quick-to-use way of collecting and labelling data. The MongoDB Python driver enables the easy use of a document database. Finally, the user-friendliness of machine learning in Python: many available pre-trained deep learning models, different libraries, as well as a big codebase.
Good practices regarding database versioning and migration are not so easy to implement in a version control system. Initial development is easy, using pure git and sometimes some meta SQL generation scripts. But when it comes to maintaining databases already in production, good practices differ a lot, since SQL patches cannot be handled the same way as git diffs.
PUM (https://github.com/opengisch/pum) is a Python program that can be used via the command line or directly from another Python program. Similarly to other database migration management tools like Flyway or Liquibase, PUM is based on metadata tables. It manages the upgrade of a PostgreSQL database using delta files. Before upgrading the production database, PUM allows you to easily duplicate the database on the fly to test whether applying the delta files produces the desired result. Only if it does is the migration performed on the production database.
PUM has been developed to solve issues encountered in the QWAT and QGEP projects, which are open source Geographic Information Systems for network management based on QGIS. QWAT already had a dedicated migration tool, which allowed both working on the data model using git and using delta files for migrations. QGEP needed something similar, so it was decided to build a more generic, yet simple, tool to handle both.
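This is not PUM's actual API, but the metadata-table idea common to such migration tools can be sketched with the standard library's sqlite3 module (the delta contents below are invented for illustration):

```python
import sqlite3

# Hypothetical delta files: ordered (version, SQL) pairs. A real tool
# like PUM reads these from .sql delta files on disk.
DELTAS = [
    ("0.0.1", "CREATE TABLE pipe (id INTEGER PRIMARY KEY, diameter REAL)"),
    ("0.0.2", "ALTER TABLE pipe ADD COLUMN material TEXT"),
]

def upgrade(conn):
    """Apply every delta newer than what the metadata table records --
    the core idea behind delta-file migration tools."""
    conn.execute("CREATE TABLE IF NOT EXISTS upgrades (version TEXT)")
    applied = {v for (v,) in conn.execute("SELECT version FROM upgrades")}
    for version, sql in DELTAS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO upgrades VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn)
upgrade(conn)  # second run is a no-op: all deltas already recorded
print(conn.execute("SELECT version FROM upgrades").fetchall())
```

Because the applied versions live in the database itself, the same upgrade command is safe to run against a copy of production first, as PUM does.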
There is a compelling need for more productive design and construction engineering procedures for civil engineering projects. Many engineering procedures are still carried out manually, typically using spreadsheets and CAD, are poorly documented and insufficiently checked, and are not suitable for automation or reuse. Much emphasis is being put on Building Information Models (BIM) to digitize design and construction. BIM software is however difficult and time-consuming to apply to discipline-specific engineering procedures. We focus on using the Open Source geospatial ecosystem, using input and output data formats which can be integrated into available BIM models, as we believe that this ecosystem can be advantageous for engineering procedures.
Our previous proof-of-concept work explored "Analyzing Spatial Data Along Tunnels" and "Organizing Geotechnical Spatial Data" for design integration in geotechnical and tunnel engineering. The work included data handling, calculations, a geospatial platform and BIM attribute coding (www.geopython.net/pub/JournalofGeoPython22017.pdf).
We have developed the proof-of-concept work into a decision support system (DSS) for engineering, first focusing on hydropower pressure tunnels. A variety of design and construction monitoring procedures are under development.
The development of engineering procedures is done using Python in Jupyter Notebooks, accessible to design and construction engineers. Jupyter Notebooks provide full documentation of each engineering procedure as design reports and include design criteria, input data and a method statement. Example Notebooks will be presented.
The app will access the engineering procedures developed in Python through an API exposed by a Python microservice (Flask) using a defined data structure. The API will be developed incrementally following initial development in Jupyter Notebooks.
Our overall goal is the supervised automation of design, project management, construction monitoring and operations processes for civil engineering, using programmed computational methods, aimed at a substantial improvement of productivity for routine tasks and at decision support for project-specific engineering challenges.
The MesoHABSIM model was created to offer a tool for studying the spatio-temporal variability of fluvial habitats available for fauna, according to the flow rate and the morphology of the watercourse. Operators initially had three plugins developed ad hoc for the QGIS suite (with an exception for the field module, which was also developed with ESRI ArcPad libraries). The MesoHABSIM roadmap foresees two major releases. The first, which is described in this talk, is based on two QGIS plugins instead of the original three. This release is characterised by a service layer, accessible through a REST API, that hides a database. The APIs have been developed with the Django REST Framework and are made available for loading and validating data and for subsequent processing. Modules such as Pandas and GeoPandas have been introduced in order to make the whole process consistent with the Python framework. The second step, for the back-end part, involves the replacement of R with scipy and scikit-learn, the simultaneous migration from SQLite to PostgreSQL/PostGIS, and the adoption of Django REST Framework GIS. Operators are thus equipped with a mobile application that can be used offline, replacing the current module developed for QGIS and ESRI ArcPad. SaaS or not SaaS? The ambition of MesoHABSIM is to offer its users a simple, reliable tool that allows them to apply the processing effectively without having to bear any hardware and software costs beyond pure field equipment and a browser.
Mosquito Alert is a cooperative citizen science observatory coordinated by different public research institutions. Its main objective is to fight the expansion of the tiger mosquito and the yellow fever mosquito, two invasive species that are vectors of global diseases like Zika, Dengue and Chikungunya.
With the Mosquito Alert app anyone can report a possible finding of tiger mosquito or yellow fever mosquito and their breeding places on the public road by sending a photo. This information complements the scientific work and allows public health managers to use it to monitor and control the spread of mosquitoes in neighborhoods and cities.
The talk will include an introductory description of the project and the key software involved: the citizen app, the experts app and the public map. The main part of the talk will be around the public map, with a special focus on the API used to feed it with data. The map API has been developed with Python and the Django framework. GeoDjango and Django REST framework were deliberately not used, in order to keep the requirements minimal. This being our first experience with Python, we will share what we learned: going from raw SQL to Model querysets, from long, complex (and WET) views to class-structured (and DRY) libraries, ...
“It must be cloud based, and feel free to choose whatever technologies you want.” This was more or less how the cantonal project leader asked me to start this project. The Cadastre of public-law restrictions on landownership (PLR Cadastre) is an official Swiss project aiming to provide information about the most important public-law restrictions on landownership. Since this project requires complex and massive geoprocessing, FME has been used intensively. This ETL software is considered the Swiss army knife for data and is widely used in the geospatial field. Despite being really complete, FME alone would not have been able to fulfil all the initial prerequisites. The PLR Cadastre relies on various technologies such as FME Server, ArcGIS Online, Amazon RDS, API Gateway and Lambda, and Flask. Python plays an essential role in almost all of these elements. My presentation will show how Python helped to enhance FME's capabilities and how it has been integrated at every stage of this project's components.
In a river reach, the Flow Duration Curve (FDC) provides an estimation of water resources availability and is usually the representation adopted to quantify the availability and variability of water for hydropower potential production. In the absence of local discharge measurements, the FDC can only be estimated with regional statistical models.
A regional model is used here for the prediction of FDCs at ungauged basins in North-Western Italy, by means of a procedure proposed in 2014 (Masoero et al., IAHS) in which the analytical FDC is represented by a Burr distribution curve, whose parameters can be related to a set of topographic, climatic, land use and vegetation descriptors representative of the basin scale. This regional procedure requires extensive GIS calculations to evaluate the geomorphological and climatic characteristics of a basin, using pre-determined raster maps and a digital terrain model. In a first step, in order to obtain the above-mentioned parameters, we developed two Python scripts for the QGIS Processing Toolbox, using GRASS algorithms for the automation of spatial analyses. Afterwards, with the aim of providing access to GIS data and functionality over the internet through standard internet protocols, two WPS (Web Processing Service) procedures have been implemented, accessible both via web browser and through desktop GIS software such as QGIS. The use of scripts through WPS allows users to access calculations independently of the underlying software, and data do not need to be housed locally (client side) but are maintained by the hosting entity. Moreover, loading times are faster than with client-side scripting. For sharing these procedures, a web platform was developed with free and open-source software, using: PyWPS to set up the WPS processes, with GRASS GIS as a backend to access all the geoprocessing functionalities; GisClient3 to build the WebGIS (accessible through client browsers); and Apache as web server. GisClient3 is an interesting web authoring tool configurator for PostGIS and MapServer that enables both building Mapfiles and providing OpenLayers maps.
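The Burr-distribution representation of the FDC mentioned above amounts to a one-line exceedance formula. A minimal sketch, with purely illustrative parameter values (not from the regional model):

```python
def fdc_exceedance(q, c, k, scale):
    """Exceedance probability (flow duration) of discharge q under a
    Burr XII distribution: P(Q > q) = (1 + (q/scale)**c) ** (-k).
    Parameter values used below are purely illustrative."""
    return (1.0 + (q / scale) ** c) ** (-k)

# Fraction of time discharge exceeds 5, 10 and 20 m3/s for one
# made-up parameter set (c, k, scale)
for q in (5.0, 10.0, 20.0):
    print(q, fdc_exceedance(q, c=2.0, k=1.5, scale=10.0))
```

The regional model's job is to predict (c, k, scale) from the basin descriptors computed in the GIS step.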
By creating hydraulic simulations of urban wastewater networks, engineers can prevent flooding, reduce pollution and prepare for the future. This talk describes development of Python software to automate the generation of drain areas for modelling, from the initial idea to solution - distilling years of engineering experience. Work that would have taken weeks is now completed in less than half an hour. This has required the development of innovative techniques to solve complex problems and also to take into account interplay between multiple datasets. Thousands of drain areas are created with this software. Each area is fully characterised according to many factors including building geometry, network topology and geology. Automatic suggestions are also made for future optimisation of the network. It is currently being used for projects ranging in size from small villages to major cities.
Have you ever tried to define and process complex workflows for data processing? If the answer is yes, you might have struggled to find the right framework for that. You've probably come across Celery, a popular task queue for Python. Celery is great, but it does not provide enough of the flexibility and dynamic features that are badly needed in complex flows. Having discovered all these limitations, we decided to implement Selinon.
Selinon is a task flow manager built on top of the popular Celery distributed task queue. Selinon enhances Celery's task flow management and allows you to create and model task flows in your distributed environment that can dynamically change behaviour based on results computed in your cluster, which can be orchestrated using tools such as Kubernetes or OpenShift. Task flow configuration is done in simple YAML files. Selinon also offers advanced features such as automatically resolving which tasks need to be executed in selective task runs, an automatic tracing mechanism, integration with Sentry monitoring, and support for changing your task flows on redeployment, among many others.
Calculating the environmental impacts of goods and services is difficult, because their supply chains span the industrial world. Life cycle assessment (LCA) is the most popular methodology for such calculations, but closed-source commercial LCA software is rigid and slow. Over the last ten years, I have developed Brightway, a Python-based open source framework for LCA. As Brightway has developed a community following, it has been extended to include site-dependent assessment, which matches maps of where emissions happen with the impact of different kinds of emissions in space; a library for consistent topological base data and calculations; system models, which describe how to turn raw data into a linked supply chain network; data manipulation and model integration; and the storage and use of uncertainty distributions and real-world statistical data. In this talk, I describe the development of these tools, including the advantages of using Python, and then briefly show how they can be applied in an example of future cars and airplanes.
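At its core, the LCA computation such frameworks perform is a linear solve: scale the activities so that supply meets the functional unit, then score the resulting emissions. A toy two-activity sketch (all numbers invented for illustration, not from Brightway):

```python
# Toy version of the core LCA calculation: solve the technosphere
# system A @ x = f for the activity scaling vector x, then score the
# resulting emissions. All numbers are made up.

def solve_2x2(a, f):
    """Cramer's rule for a 2x2 system a @ x = f."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (f[0] * a[1][1] - a[0][1] * f[1]) / det
    x1 = (a[0][0] * f[1] - f[0] * a[1][0]) / det
    return [x0, x1]

A = [[1.0, 0.0],     # technosphere: products made/consumed per activity
     [-0.2, 1.0]]    # activity 0 consumes 0.2 units of product 1
f = [1.0, 0.0]       # functional unit: 1 unit of product 0
B = [0.5, 2.0]       # kg CO2 emitted per unit of each activity
cf = 1.0             # characterisation factor (kg CO2-eq per kg CO2)

x = solve_2x2(A, f)
impact = cf * sum(b * xi for b, xi in zip(B, x))
print(x, impact)
```

Real inventories have tens of thousands of activities, which is why Brightway relies on sparse matrices rather than a hand-rolled solver.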
Geography is defined as "the study of the physical features of the earth and its atmosphere, and of human activity as it affects and is affected by these, including the distribution of populations and resources, land use, and industries" (Oxford Dictionary). What is the importance of geography in today's digital age? What are geospatial technologies, and how do they bridge the connection between traditional geography and modern technology? In this talk, we will explain the importance of "where", how technology is rapidly improving the geospatial world, and how we can use maps to communicate with our peers, students, and clients. Come and discover your inner geographer!
The common thinking for getting more performance out of your geoprocessing is to use expensive hardware designed for servers. After being given one of these big, expensive machines, I noticed that workflows I used to run on an old laptop were running slower on it. To discover why, I tested a Python geoprocessing script on as many different hardware configurations as I could. My results are in, and they're here to save you time and money!
How can deep learning and semantic segmentation be an efficient way to detect and spot inconsistencies in an existing dataset? The OpenStreetMap dataset is taken as a use case. Data quality is a must, but a tall order, and any technique that helps improve data quality is therefore more than welcome.
Machine and deep learning can tackle some old issues in a far more convenient and efficient way than ever before. For instance, deep learning with aerial imagery semantic segmentation can improve feature detection and allow us to spot dataset inconsistencies.
In this presentation we will focus on how an OpenStreetMap subset dataset (for instance, roads and buildings in an area) can be evaluated to produce a quality metric.
We want to focus on:
Deep learning vision, and specific satellite imagery considerations (high and low resolutions, multispectral dimensions, dataset aggregation, ...)
How to qualify a good-enough labelled dataset (to allow supervised learning)
FOSS4G (PostGIS and GRASS) integration with Python ML/DL frameworks (with NumPy as an interoperability format)
Concrete solutions for efficient processing of wide coverage areas
My goal is to inspire developers by showing how an app can be feature-boosted with simple machine learning API calls. It is a lot of fun to have attendees take out their smartphones and participate together in a live demo.
There are many choices when looking for the right solution to manage and deploy our applications. We, as developers, are often overwhelmed with all of the options to choose from. One of our many options is Kubernetes. I’ve recently dived into deploying and managing an application on Kubernetes. In this talk, I will take you through my journey and explain the advantages and disadvantages of Kubernetes:
- What is Docker
- What is Kubernetes
- Dockerizing an application
- Using Kubernetes to deploy a containerized application
- Pros and cons of Kubernetes
- Common pitfalls
By the end of this talk, you will have the resources and tools to determine if this is the right solution for you and how you can get your application deployed!
An action is something that happens when you click on a feature. It can add a lot of extra functionality to your map, allowing you, for example, to retrieve additional information or modify an object. Assigning actions can add a whole new dimension to your map! Actions can be written in a simplified QGIS syntax or in Python to allow full interaction with the PyQGIS API.
Python project macros allow executing code for a whole project. The three available macros (openProject, saveProject and closeProject) run your custom functions whenever they are called. Furthermore, thanks to the signal/slot architecture of QGIS, setting up Qt connections in the openProject macro allows your custom functions to react to map events during your whole work session.
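A minimal macro skeleton might look like the following; QGIS calls these three functions automatically on the corresponding project events, and the print statements here merely stand in for real setup code, so the sketch also runs outside QGIS:

```python
# Skeleton of the three QGIS project macros. In QGIS these are stored
# with the project and invoked automatically when the project is
# opened, saved or closed.

def openProject():
    # Typically used to set up Qt signal/slot connections, e.g. so a
    # custom function reacts to map events for the whole work session.
    print("project opened: connections could be set up here")

def saveProject():
    print("project saved")

def closeProject():
    print("project closed: clean up connections here")

openProject()   # in QGIS this call happens automatically
```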
Python is a great language for dealing with spatial data and solving geo-related problems, however, it is not uncommon to find that tasks can be long running or numerous and would benefit from asynchronous execution. Celery is a task queuing library which provides a framework for solving this very problem.
In our workshop you will write a simple program using Celery and RabbitMQ to process geometries (with Shapely) and write to a database (PostGIS) asynchronously. We will demonstrate some of the cool features of Celery including out of the box monitoring tools, failure handling and (hopefully) scalable distributed processing.
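As a stdlib-only preview of the pattern that Celery generalises (with RabbitMQ as the broker and worker processes as consumers), here is a toy queue/worker sketch; the "geoprocessing" task is deliberately trivial and stands in for the Shapely/PostGIS work done in the workshop:

```python
import queue
import threading

# A queue.Queue stands in for the RabbitMQ broker; worker threads
# stand in for Celery workers consuming tasks asynchronously.
tasks = queue.Queue()
results = []
lock = threading.Lock()

def bbox_area(geom):
    """Toy 'geoprocessing' task: area of a bounding box."""
    xmin, ymin, xmax, ymax = geom
    return (xmax - xmin) * (ymax - ymin)

def worker():
    while True:
        geom = tasks.get()
        if geom is None:          # poison pill: shut this worker down
            tasks.task_done()
            return
        area = bbox_area(geom)
        with lock:
            results.append(area)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for geom in [(0, 0, 2, 2), (0, 0, 1, 3)]:
    tasks.put(geom)
for _ in threads:
    tasks.put(None)
tasks.join()
print(sorted(results))  # [3, 4]
```

Celery adds what this sketch lacks: persistence, retries, monitoring, and distribution across machines.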
This workshop will engage with current best methods in spatial data science, teaching participants methods to leverage information about location and relation in their analysis of data. We will begin by covering the basics of how to work with spatial data, thinking geographically, and how the representation of spatial data can be encoded into modelling methods. Then, we will examine a few case studies, such as the detection of spatial outliers -- areas where some things do not fit into their geographic landscape -- also known as "hotspots" or spatial clusters. We will examine how to include information about arrangement and proximity into more interesting, relevant, and powerful model components, not just nearest-neighbour regressions. A joint offering from the Center for Geospatial Sciences at the University of California, Riverside, the Quantitative Spatial Science Lab at the University of Bristol, and the Center for Spatial Data Science at the University of Chicago, this workshop will provide attendees with a strong sense of the current state of the statistical art in the analysis and understanding of spatial data.
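One idea from the workshop, comparing each area's value with the average over its neighbours (its "spatial lag"), can be sketched in plain Python. Real tools derive the neighbour weights from the geometry; here both the values and the neighbour lists are hard-coded toy data:

```python
# Minimal sketch of the idea behind spatial outlier detection: compare
# each area's value with the average of its neighbours.
values = {"a": 10.0, "b": 11.0, "c": 9.5, "d": 30.0}
neighbours = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def spatial_lag(key):
    """Average value over the neighbours of one area."""
    nb = neighbours[key]
    return sum(values[n] for n in nb) / len(nb)

# An area whose value differs strongly from its lag is a candidate
# spatial outlier ("does not fit its geographic landscape").
for key in values:
    print(key, values[key], round(spatial_lag(key), 2))
```

Here area "d" (value 30 against a neighbourhood average of about 10) is the obvious candidate outlier.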
Geospatial data analysis and processing often requires us to run series of intermediate tasks repetitively, for example when we need to invoke the same methods on different data sets. A way to automate, and potentially simplify, these tasks is through scripting, which can be achieved using Python within QGIS, the popular open source GIS.
This workshop starts with an overview of QGIS, then moves on to introduce Python programming in QGIS (PyQGIS). We then discuss QGIS' Processing framework and how it allows Python scripting, before going in depth to explain how to automate tasks using Python scripts which would otherwise require repeating the same process or re-running code. Afterwards, we show you how to use and create custom scripts in QGIS with hands-on exercises. We wrap up the course with how to use the Processing framework and create Python scripts, in QGIS or elsewhere, to help automate tasks for future use.
This course makes use of QGIS 3 and assumes some GIS knowledge, but remains friendly enough for beginners and first-timers. So, we welcome both enthusiasts who are eager to learn how to script in Python and those interested in learning more about geospatial data analysis and processing.
I will present on-going work (my attempts) to analyze and visualize geospatial and temporal variation in pleural mesothelioma data in the UK.
GeoPySpark is a Python library for processing large amounts of geospatial data in a distributed environment. A binding of GeoTrellis, a Scala geospatial library, GeoPySpark allows users to take advantage of the speed and scalability of Apache Spark while working with a Python interface.
This talk will discuss GeoPySpark itself as well as its features via an example use case. The first part of the presentation will be a general overview of how GeoPySpark works and its features. In the second part of the talk, we will develop an example use case for GeoPySpark in a Jupyter notebook that will utilize a cluster on Amazon’s EMR.
This notebook will generate a friction surface that will enable a cost distance surface calculation between two travel waypoints. In order to visualize the intermediate steps and final result, this example’s Jupyter Notebook will be a fork of Kitware's Geonotebook. This application allows for the layers to be displayed on a map that accompanies the notebook.
Note: Due to the length of the talk and the amount of data being worked with, all of the following operations will be carried out beforehand.
When producing the friction layer, certain variables need to be considered. The following elements will be taken into account: elevation, land cover, hydrology, roads, and trails. All of these datasets cover the lower 48 states of the United States of America. See here for more information on the source and format of the datasets.
Once the data has been read in and formatted, the next phase will be to calculate the friction layer. The following steps will be performed in order to produce this layer:
- Calculate the slope of the NED layer.
- From the slope layer, derive walking speeds using Tobler's hiking function. This will become the base friction layer.
- The road and trail data will be read from an orc file and rasterized into a single layer.
Each step will have its data displayed on a map to better visualize the changes to the friction surface as these factors are taken into account.
The cost distance layer will be calculated using the final friction layer and the two points of interest. Once we have visualized the cost distance layer, the final step of this demo is to save the friction layer to a remote backend for future use. In this case, the layer will be saved to Amazon's S3 service.
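Tobler's hiking function used for the base friction layer is compact enough to state directly; the reciprocal of speed gives the friction (travel time per unit distance) that the cost distance operation accumulates. A minimal scalar sketch (GeoPySpark applies the equivalent over whole raster tiles):

```python
import math

def tobler_speed_kmh(slope):
    """Tobler's hiking function: walking speed (km/h) as a function of
    slope (rise over run). Fastest on a slight downhill of about -5%."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def friction(slope):
    """Friction = travel time per unit distance (h/km), the reciprocal
    of speed -- the quantity a cost distance operation accumulates."""
    return 1.0 / tobler_speed_kmh(slope)

for s in (-0.05, 0.0, 0.25):
    print(f"slope {s:+.2f}: {tobler_speed_kmh(s):.2f} km/h")
```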
Sources and formats of the data:
Previous studies have shown that 80% of all decisions in business and private life are based on geodata, and Python is a commonly used programming language for the processing of geographical data. Today, however, a huge challenge is the explosion of data volumes. This talk shows the possibilities of using Python for geodata processing: dealing with big data, data handling and analysis, and cloud computing.
|09:00 - 09:20||Opening GeoPython 2018
Martin Christen, FHNW
|09:30 - 12:30||Workshop 3h
Spatial Data Science with PyData
Levi John Wolf & Sergio Rey, (University of Bristol & University of Chicago) & University of California, Riverside
(Coffee Break: 10:30-11:00)
PyQGIS Layer actions and project macros
Marco Bernasocchi, OPENGIS.ch GmbH
(Coffee Break: 10:30-11:00)
|12:30 - 13:30||Lunch Break|
|13:30 - 15:30||Workshop 2h:
Task queues with Celery and RabbitMQ
Matt Walsh, thinkWhere
QGIS Processing Framework: Automating Tasks with Python
Stefan Keller, HSR Geometa Lab
|15:30 - 16:00||Coffee Break|
|16:00 - 18:00||Workshop 2h
Introduction to geospatial data analysis with GeoPandas and the PyData stack
Joris Van den Bossche, Université Paris-Saclay Center for Data Science, INRIA
Introduction to Spatial Data Processing using FME and Python
Régis Longchamp, INSER SA
|18:00 - 21:00||Ice Breaker Party|
|Room TBA||Room TBA|
|09:00 - 9:15||Opening Day 2
|09:15 - 10:30||Session: Web Services I
Why Kubernetes: Finding the Best Solution for Your Needs, Rizchel Dayao, IBM
Using PyWPS for Water Resources Quantification in Mountain Basins, Susanna Grasso, Polytechnic of Torino, Department of Environment, Land and Infrastructure Engineering
|10:30 - 11:00||Coffee Break & Lightning Talk Registration|
|11:00 - 12:40||Session: GIS & Mapping I
Who Needs Geography in Today's Digital World?, Corryn L Smith, Northern Arizona University
Kroměříž, Győr, Łomża, Mâcon - Around Europe on a Single Keyboard, Miroslav Šedivý
Automated Drain Area Creation for Municipal Hydraulic Simulations, Robin Dainton, Hunziker-Betatech AG
Using Python for regionalized sustainability assessment, Chris Mutel, PSI
|Session: Machine Learning I
Case Study - how to use big-data machine learning to predict future delays and bottlenecks in public transport, Ture Friese, Computas AS
Getting an Edge with Network Analysis in Python, Alon Nir
Understanding and Implementing Recurrent Neural Network using Python, Anmol Krishan Sachdeva, University of Bristol, United Kingdom
|12:40 - 13:40||Lunch Break|
|13:40 - 15:20||Session: Spatial Databases
Postgres migrations using Python, Mario Baranzini, OPENGIS.ch GmbH
PLR (ÖREB) Cadastre Kanton Wallis, Régis Longchamp, Inser SA
Reproducible geoscience: Volcanic eruption mass calculations using Python, GRASS7 and Pandas, John A Stevenson, British Geological Survey
|Session: Machine Learning II
Boost your app with Machine Learning API, Laurent Picard, Google
Deep Learning on Spatial Imagery to improve GeoSpatial DataSet Quality, an OSM use case, Olivier Courtin, DataPink
Classification of geological features on Satellite images, Johannes Oos
|15:20 - 16:00||Coffee Break|
|16:00 - 17:15||Session: GIS & Mapping II
Computing drone-based image footprint with Python, Hans-Jörg Stark, SBB
MesoHABSIM move to the Cloud, Erik Tiengo, GisUp
|17:15 - 18:15||Lightning Talks
5 minutes talks, registered at the conference
|19:00 - 22:00||Conference Dinner|
|09:00||Opening Day 3|
|09:00 - 10:25||Session: Web Services II
Selinon - dynamic distributed task flows, Fridolín Pokorný, Red Hat
Pumping up Python modules using Rust, Vigneshwer Dhinakaran, Mozilla TechSpeaker
|10:25 - 11:00||Coffee Break|
|11:00 - 12:40||Session: Geovisualization
Developing Automated Geospatial Procedures for Civil Engineering, Joseph Kaelin, Katharina Kaelin, Alan Hodgkinson, Philippe Nater
The Mosquito Alert map implementation: a use case in citizen science, Marc Compte, GIS Service - University of Girona
Approaching geovisualization and remote sensing with GeoViews, Giacomo Debidda
Mapping Mesothelioma, Carl Reynolds, Imperial College London
|12:40 - 13:15||Lunch Break|
|13:15 - 14:15||Lightning Talks
5 minutes talks, registered at the conference
|14:15 - 15:30||Session: Data Processing
GeoPandas: easy, fast and scalable geospatial analysis in Python, Joris Van den Bossche, Université Paris-Saclay Center for Data Science, INRIA
Everything you've been told about geoprocessing performance is wrong, Aaron Styles, Geoplex
Introducing GeoPySpark, a Big Data GeoSpatial Library, Jacob Bouffard, Azavea
|15:30 - 15:50||Closing Session|
Welcome to Basel! To make your stay as pleasant as possible, the city of Basel has a special “upgrade” for you: Every hotel guest in Basel receives a BASELCARD when checking in. You can use it to travel free of charge on public transport, to access the free guest WiFi and to benefit from attractive cultural and leisure activities. We wish you an unforgettable stay in Switzerland’s cultural capital.
If you have any questions, contact us:
The GeoPython Organisation Team: