Tuesday, September 8, 2015

Portable API Driven Or At Least JSON Driven Interactive Visualization Tooling

While working on the API and JSON driven visualization strategy for my Adopta.Agency open data work, I saw the cloud monitoring platform Librato publish its new "Space" interface as a Heroku add-on. I like dashboards and visualization tooling that can live on multiple platforms, and that is engineered to be as portable and deployable as possible.

In a perfect world, infographics would be done using D3.js, and would all show their homework, with JSON or API definitions supporting any visualizations. All of my Adopta.Agency projects will eventually possess a simple, embeddable, D3.js visualization layer that can be published anywhere. Each project will have its JSON localized in the publicly available Github repository, and be explorable via any browser using Github Pages.
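The light duty approach described above can be sketched as a single static page that GitHub Pages could serve alongside the project's JSON. This is a minimal, hypothetical sketch, not anything from the Adopta.Agency repositories: the file name `data.json` and its `label`/`value` fields are assumptions, and it uses the D3.js v3 API that was current at the time.

```html
<!DOCTYPE html>
<meta charset="utf-8">
<!-- Assumes a data.json in the same repo, e.g.
     [{"label": "2013", "value": 12}, {"label": "2014", "value": 28}] -->
<svg width="400" height="200"></svg>
<script src="https://d3js.org/d3.v3.min.js"></script>
<script>
// D3 v3 style: d3.json(url, callback(error, data))
d3.json("data.json", function(error, data) {
  if (error) throw error;
  var height = 200,
      barWidth = 400 / data.length;
  // One bar per JSON record, scaled by its value
  d3.select("svg").selectAll("rect")
      .data(data)
    .enter().append("rect")
      .attr("x", function(d, i) { return i * barWidth; })
      .attr("y", function(d) { return height - d.value * 5; })
      .attr("width", barWidth - 2)
      .attr("height", function(d) { return d.value * 5; });
});
</script>
```

Because everything is a static file, the same page works from a local checkout, GitHub Pages, or embedded anywhere else.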

The Librato approach reminded me that I'd also like to see modular, containerized versions of more advanced tooling, dashboards, and visualizations around some projects. This would only apply in scenarios where a little more compute is needed behind the visualizations than can be delivered with simple D3.js + JSON hosted on Github. Essentially this gives me two grades of portable visualization deployment: light and heavy duty. I like the idea that it could be a native add-on, wherever you are deploying an open API or dataset.
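For the heavy duty grade, the containerized version might look something like the following. This is purely a hypothetical sketch of what a Docker-packaged dashboard could be, not an actual Adopta.Agency artifact: the Node.js base image, `server.js`, and port are all assumptions, standing in for whatever app serves the D3.js layer plus the extra compute behind it.

```dockerfile
# Hypothetical container for a "heavy duty" visualization:
# a small Node.js app serving the D3.js dashboard, with an
# API layer doing the heavier data lifting behind it.
FROM node:0.12
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package.json ./
RUN npm install

# Copy the dashboard, JSON, and server code
COPY . .

EXPOSE 5000
CMD ["node", "server.js"]
```

The same image could then run locally, on Heroku, or anywhere else Docker is supported, which is the portability I'm after.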

I still have a lot of work to do when it comes to the light duty blueprint of JSON + D3.js, and API + D3.js, to support Adopta.Agency. I will focus on this, but keep in mind doing modular cloud deployments using Docker and Heroku for the datasets that require heavier data lifting.



from http://ift.tt/1UFBYUn

Saturday, September 5, 2015

Pushing Forward Algorithmic Transparency When It Comes To The Concept Of Surge Pricing

I've been fascinated by the idea of surge pricing since Uber introduced the concept to me. I'm not interested in it because of what it will do for my business; I'm interested because of what it will do for / to business. I'm also concerned about what this will do to the layers of our society who can't afford, and aren't able to keep up with, this algorithmic meritocracy we are assembling.

While listening to my podcasts the other day, I learned that Gogo Inflight wifi also uses surge pricing, which is why some flights are more expensive than others. I had long suspected they used some sort of algorithm to figure out their pricing, because on some flights I'm paying $10.00, and on others I'm paying $30.00. Obviously they are in search of the sweet spot, to make the most money off business travelers looking to get their fix.

Algorithmic transparency is something I'm very interested in, and something I feel APIs have a huge role to play in, helping us make sense of exactly how companies structure their pricing. This is right up my alley, and something I will add to my monitoring: searching for stories that mention surge pricing, and for startups who wield it publicly as part of their strategy, as well as those who work to keep it a secret.

This is where my research starts going beyond just APIs, but it is also an area I hope to influence with some API ways of thinking. We'll see where it all goes; hopefully, by tuning in early, I can help steer some of the thinking when it comes to businesses approaching surge pricing (or not).



from http://ift.tt/1JI4ZK1

Being a Data Janitor and Cleaning Up Data Portability Vomit

As I work through the XML, tab and comma separated, and spreadsheet strewn landscape of federal government data as part of my Adopta.Agency work, I'm constantly reminded that the data being published is often retribution, more than it is anything of actual use. Most of what I find, despite much of it being part of some sort of "open data" or "data portability" mandate, was never actually meant by its publishers to be usable.

In between the cracks of my government open data work, I'm also dealing with the portability of my own digital legacy, working to migrate exported Evernote notes into my system, as well as legacy Tweets from my Twitter archive download. While the communicated intent of these exports from Evernote and Twitter may be about data portability, like the government data, they really do not give a shit about you actually doing anything with the data.

The requests for software as a service providers and government agencies to produce open data versions of our own user or public data have upset the gate-keepers, resulting in what I see as passive aggressive data portability vomit: here you go, put that to use!! A president mandating that we database administrators open up our resources, and give up our power? Ha! The users who helped us grow into a successful startup actually want a copy of their data? Ha! Fuck you! Take that!

This is why there is so much data janitorial work today: many of us are playing the role of janitor in the elementary school that is information technology (IT), constantly coming across the data portability vomit produced by the gatekeepers of legacy IT power structures (the 1st and 2nd graders), and the 2.0 Silicon Valley version (the 3rd and 4th graders). You see, they don't actually want us to be successful, and this is one of the ways they protect the power they perceive they possess.



from http://ift.tt/1KxseMw