Louisville, KY: Open Data
Louisville-Jefferson County Metro Government
Population: 756,832 (2013)
Form of government: Mayor-council
Open data champion: Tim Welsh, Deputy Director of Technology
Date of interview: June 2014
What were the most important steps you took to get open data off the ground?
Tim: The most important step occurred when Mayor Greg Fischer issued an executive order in October 2013 declaring that all Metro data is ‘open’ by default. This directive gave my agency, Metro Technology Services (MTS), the leverage to work with agencies to identify and implement the processes that produce Open Data datasets. The current catalog has between 30 and 40 datasets; our process will increase the catalog size by more than 10x.
While our initial efforts identified physical databases and their locations, we found that other cities' sites are not primarily built by publishing complex data structures and their schemas directly to their portals. We asked our agencies to provide the artifacts that drive agency decisions as candidates for Open Data datasets. As we configure the datasets for our portal, we learn how to broaden the scope of the data from the particular artifact to more general, comprehensive sets of data. Our sense is that non-technical consumers will be interested in focused datasets, while technical consumers will be interested in general datasets. We see examples of both in other portals and have followed this pattern.
Metro’s portal was constructed from home-grown and open-source toolsets. The major Open Data portal vendors have commented on the quality of its design. Our initial focus has been to use the home-grown processes, and our datasets are structured so that they can be easily migrated to a vendor platform in the future.
How did you prioritize open data in your city?
Tim: As indicated above, Mayor Fischer set the Open Data initiative as a priority for city data. The scheduling of a given dataset’s deployment is driven by several factors, including:
- When the agency can supply its data acquisition methodology
- The difficulty of deriving and deploying the dataset
- Public events that could be enhanced by publishing the dataset
- Efficiency gains in agency workflows from standardizing and optimizing data extraction methodologies
- The forecasted reduction in the time required to fulfill Open Records requests
What have been the biggest challenges?
Tim: Some of the bigger challenges include:
- Overcoming agency concerns regarding the potential for the public to misinterpret data.
- Overcoming agency concerns regarding data inconsistencies when agencies report different facts about what appear to be the same entities.
- Establishing the workflows and stakeholders within IT to make Open Data part of the group's DNA, rather than a project that will end at some point.
- Acquiring City data from systems housed and managed by vendors.
What tactics have you tried to overcome those challenges?
- Open Data datasets are not released to the public until the contributing Agency approves their form, content and metadata descriptions.
- We have asked for assistance from high ranking government officials to clear obstacles.
- We have worked with vendors to understand the most efficient ways to use vendor APIs to extract the City’s data.
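The vendor-extraction step described above can be sketched in outline: pull records from a vendor system's API, keep only the agency-approved fields, and flatten the result into a CSV dataset for the portal. This is a minimal illustration, not Louisville's actual pipeline; the payload, field names, and permits example are all hypothetical.

```python
import csv
import io
import json

def records_to_csv(vendor_json: str, fields: list[str]) -> str:
    """Flatten a vendor API's JSON payload into a CSV dataset,
    keeping only the approved, non-sensitive fields."""
    records = json.loads(vendor_json)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for rec in records:
        # Unapproved fields (e.g. personal identifiers) are dropped here.
        writer.writerow({f: rec.get(f, "") for f in fields})
    return buf.getvalue()

# Hypothetical vendor payload for a building-permits dataset.
payload = json.dumps([
    {"permit_id": "P-100", "issued": "2014-05-01", "type": "electrical", "applicant_ssn": "..."},
    {"permit_id": "P-101", "issued": "2014-05-03", "type": "plumbing", "applicant_ssn": "..."},
])

print(records_to_csv(payload, ["permit_id", "issued", "type"]))
```

In a real deployment the JSON would come from an authenticated call to the vendor's API, and the approved field list would come from the agency's metadata review described below.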
- We have trained IT staff (management, development, design, and the personnel who interface with agency partners) in Open Data philosophy and process, so that ownership is department-wide and every implementation step can be executed by multiple individuals.
How have you proved the value of open data?
Tim: Since the initiative is new, we are just beginning to identify and calculate its value. There is an up-front investment in rolling out open data; staff resources, specifically, need to be dedicated to the effort. I believe the value will come from improved data collection, improved efficiency in fulfilling FOIA and other public records requests, opportunities for citizens, entrepreneurs, and others to use open data for productive ends, and more transparent government.
What are some of your early successes?
- We are well on the way to releasing, in July, our first group of new datasets, which will triple our current count.
- Agencies have provided us with over 400 datasets, which will be deployed in 3 phases.
- Several larger general purpose datasets have been identified from the implementation of targeted datasets.
- We have deeper institutional knowledge about the structure and deployment of City data and about which individuals have the best knowledge of specific domains.
- We have laid the political and technical groundwork for an Enterprise Data Model that can use Master Data Management techniques and vendor solutions to mine City data for new insights, increasing service levels and identifying cost savings and revenue enhancement opportunities.
What has been your most successful argument for generating buy-in among the government staff or community?
Tim: Agencies spend days on some data-gathering tasks that should take only minutes. The data from these tasks are almost always candidates for the Open Data catalog. IT brings expertise to these tasks that significantly reduces agency workload while also producing a new dataset for Open Data. Open Data provides ROI for agencies through this process and through a reduction in Open Records requests.