Tuesday, November 24, 2015

For A Brief Moment We All Had Swagger In The API Space

For a brief moment in the API space, we all had swagger. When it all began, we were working towards interoperability in a digital world where none of us actually wanted to work together, after being burnt SOA bad before. We all pretended to be working together, but in reality we were only operating on a handful of available VERBS, until one person came to the table with a common approach that we could all share--something that would bring us together for a brief moment in history.

It won't work, said the RESTafarians! It doesn't describe everything, said everyone else. However, for those who understood, it was a start. It was an approach that allowed us to easily define the value we were trying to deliver in the API space. Something that turned out to be a seed for so much more, in a seemingly rich API world, but one that in reality was toxic to anything that was actually sustainable. Somehow, in this environment, one individual managed to establish a little back and forth motion that over time would pick up momentum, setting a rhythm everyone else could follow when defining, sharing, collaborating, and building tooling online.

We all had a little swagger...

For a brief moment, we were all walking together, blindly following the lead that was given to us, while also bringing our own contribution, adding to the momentum day by day, month by month. Somehow we had managed to come together and step in sync, and move in a single direction.

This has all come to an end, as the beasts of business have had their way. There will be no open--just products that call themselves open. There will be no tune for us all to march to, whether you are a developer or not. We just have acronyms that only hold meaning to those within the club.

The fun times are over. The rhythm has ceased. Money and business have won over community and collaboration, but for a brief moment we all had swagger in the API space.

from http://ift.tt/1P5fuym

Monday, November 23, 2015

Why You May Not Find Me At The Bar After Tech Events

When you see me at conferences, you might notice that I am often very engaged while at the event. However, after the event lets out, and everyone heads off to the bar or pub, you may not find me tagging along anymore. You see, I am finding it increasingly hard to be present, because of one thing--my hearing.

You may not know this, but I am completely deaf in my left ear, and only have around 30% left in my right ear. Those were the findings of a hearing test I had done in 2007, and I'm assuming by 2015 I have lost even more. Historically I have done pretty well, with a mix of reading lips, and piecing together what words I do hear, but this year I am finding it increasingly difficult to make things work.

As I live longer with my hearing impairment, I find one side effect is that I tend to feel sounds more than I hear them, and when I'm in loud bars, I tend to feel everything, and hear nothing. This results in me feeling and hearing the loud noises, but actually understanding none of what people around me are saying. Overall this is really overstimulating, and after spending a day at a conference it can be very difficult for me, leaving me able to handle no more than maybe an hour or two in loud environments.

I have also noticed a couple of times recently that people were talking to me, usually on my left side, and I did not notice, resulting in confusion. Then, when I hear only portions of conversations and seem uninterested (because I do not know what is going on), people seem a little offended--if this was you, I am truly sorry.

I understand not everyone who hangs out with me at events will read this, but I wanted to write it anyways, and gather my thoughts. I will be ditching out of bars earlier than I have in the past, and I'm hoping the folks who really want to hang out with me will join me in a quieter setting, where I can hopefully be engaged a little more.

Thank you for understanding.

from http://ift.tt/1Npb2KA

Monday, November 2, 2015

Making Sure My Stories Are Linked Properly

When I craft a story for any of my blogs, I use a single content management system that I custom built on top of my blog API. I'm always looking to make it more valuable to my readers by providing the relevant links, but also to make it more discoverable, and linkable within my content management and contact relationship management system.

I keep track of how many times I reference companies and people within articles, and the presence of Twitter accounts, and website URLs is how I do this. So when I am writing articles, it is important that people and companies are linked up properly. Here is an example from a post I just wrote, for the API consumption panel at @APIStrat (it hasn't been published yet).

I have tried to automate this in my CMS, so when it sees someone's Twitter account, or a company name that already exists in my system, it recommends a link. However it is still up to me to OK the addition of links to Twitter handles, and company names. I do not like this to be fully automated, because I like to retain full editorial control.
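The recommendation step can be sketched in a few lines. This is a hypothetical illustration, not the actual CMS code--the lookup tables, names, and URLs here are all invented for the example:

```python
import re

# Hypothetical lookup tables -- in the real system these would come
# from the content and contact relationship management databases.
KNOWN_HANDLES = {"apistrat": "http://apistrat.com"}
KNOWN_COMPANIES = {"3Scale": "http://3scale.net"}

def suggest_links(draft):
    """Scan a draft post and recommend links, leaving the final
    OK on each addition to the editor."""
    suggestions = []
    # Match @mentions against known Twitter handles.
    for handle in re.findall(r"@(\w+)", draft):
        url = KNOWN_HANDLES.get(handle.lower())
        if url:
            suggestions.append(("@" + handle, url))
    # Match known company names appearing in the text.
    for company, url in KNOWN_COMPANIES.items():
        if company in draft:
            suggestions.append((company, url))
    return suggestions
```

Each suggestion gets surfaced in the editing interface, and nothing is actually linked until it is approved, which preserves the editorial control described above.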

I am just sharing this so that it gets baked into my operations, and I remember to use the link system more, but also to acknowledge how much work it is to make all of my storytelling work--and that ultimately it is worth it. Formally telling these stories on my blog is how I make sure all of this continues to be a reality across my operations.

from http://ift.tt/1KVlxgY

Tuesday, October 27, 2015

Striking The Right Balance With API Evangelist Partners

I get a lot of requests from individuals and companies who want to "partner" with me, with many different meanings of what this actually entails. As a one person operation I have to be very careful who I engage with, because it is very easy for large organizations, with more resources, to drown me in talk, meetings, emails, and other alternatives to actually doing what it is that I do.

Another big challenge in partnering with new startups is that they often do not have a lot of money--lots of potential value, yet an unproven track record. I love believing in new startups, but in a world where they need me in the beginning, and not so much once they've made it, I have to be wary of any company that walks in the door. You may be a great bunch of folks, but once you pile on enough investors, and changes in executive management along the way--things change.

I have a lot of startups, VCs, and other companies who love to engage with me, "pick my brain", "understand my expertise", "get my take on where things are going", "craft their strategy", and many other informal ways to tap the experience and perspective I bring to the table. Honestly I thrive on doing this, but after 5 years, and being screwed by numerous startups and open source projects, I am starting to get very wary about who I help.

I understand you are super excited about your new API idea, or service. I can easily get that way too! You see, that excitement will fade once you get traction and become attractive to investors, and so will your willingness to work with me, and share your ideas, tools, and resources with the community. Twitter is the best example of this in the wild. No, I didn't help Twitter get going, but I've seen the same syndrome play out with 50+ APIs and service provider startups in the last five years of operating.

Startups offer me equity to alleviate my concerns. Yawwwwn! Means nothing. I have a file in the filing cabinet of worthless options. Others offer me some sort of return on traffic I send to them, and conversions I generate. I guess this is a start, but it doesn't touch on the big partner deals I can often help bring to the table, or the education and general awareness I work to build in the API sector. Traditional tracking mechanisms will only capture a small portion of the value I can potentially generate for an API startup, and honestly it is a waste of our time.

This leaves me with just getting to know startups, and dating, as I like to say, for quite a while, before I engage too heavily. My only defense is to be really public with all my partnerships from day one. State publicly that I am partnering with company X. Then tag each post, white paper, or research project that I do on behalf of a relationship. Don't get me wrong, I get a lot of value out of this work, otherwise I wouldn't be doing it. However the line I want to draw in the sand is just a public acknowledgement that I am helping this company figure out their strategy, tell stories, and shape their layer of the API space.

This post is just me brainstorming a new tier of my partner program, which I'm thinking I will call "strategy and storytelling partners". When I am approached by, or discover a new API company that I find interesting, and I begin investing time and resources into this relationship, I will publish this company on my list of partners. These relationships almost always do not involve money, and usually involve a heavy investment on my part when it comes to helping them walk through strategy, and storytelling around their products, services, and potentially helping define and evolve the sector they are looking to operate within.

In the end, after several weeks of mulling over this subject, I do not see any contractual, or technological solution to tracking how I help API startups in the space--it has to be a human solution. I will almost always share the existence of a partnership with a company from day one, and it is up to me to modulate how much "investment" I give. As this benefits (or not) the startup, it will be up to the company itself to kick back to me (and the community) as a payback. If you don't, and you end up taking or keeping the lion's share of the value generated by my work, and the community I bring to the table, it is on you. The timeline will be there for everyone else to judge. #karma

from http://ift.tt/1GGJ2yY

Monday, October 26, 2015

Taking Another Look At The Tech Blogosphere

I used to be more immersed in the world of top tech blogs, when my partner in crime @AudreyWatters worked at ReadWrite, O'Reilly, and I knew more people on the beat. Over the last couple years, as I keep my laser focus on the API space, and often end up going directly to the sources of API news, I have reduced the priority of major tech blogs in my feeds, and Twitter stream(s).

During the planning of @APIStrat, and working with some of my clients at API Evangelist and APIWare, the question of other media sources, destinations, and voices in the tech space keeps coming up. As I'm crafting stories lately, I'm asking myself more and more, where should this be published? And maybe there is more value in some of this content being published beyond just my blogs.

To help me understand the landscape, I went through the top tech blogs, and hand crafted a list of the ones that I feel are relevant, and open to submissions, either as full stories or just as news tips. Here is the version 1.0 of this list:

While there are some other major publications with relevant tech news sections, these 24 represent what I'd consider to be the upper crust of accessible tech blogs, that would entertain tips, news submissions, and contributions from the space.

I'm not naive enough to just submit random stories to the list above, but with the right information, or possibly complete story, submitting to some of them might make sense from time to time. I have given quotes, information, and other contributions to some of these publications already, so I already have some inroads built.

In line with my approach to storytelling in the API space, I will also be working to profile, and build relationships with, their editors and writers. I'm already used as the "API correspondent" by some writers at the publications listed above, so adding these companies to my monitoring system, where I can slowly add the writers for each publication, seems like a sensible approach.

It will take time, but I will hopefully be able to expand my information network, to include regular contributing, tipping, and sometimes just pointing more of these tech blogs to where I feel the relevant API stories really are.

from http://ift.tt/1LTuc5q

Friday, October 16, 2015

I Am Stumbling Just A Little Bit, Please Bear With Me As I Find My Way

I've been doing API Evangelist for a while now. Most of the time I can make this work, and honestly sometimes I just fucking rock it. Right now, I keep stumbling and falling on my face. I've written the same amount of posts I usually do, but none of them are worthy of posting.

I also find in many of the conversations I engage in, I'm overly aggressive--which goes against what I'm about. I'm not apologizing for anything, cause I would never do anything I don't back up 100%. I just am not my usual self, and I am having trouble figuring out why.

It would be easy to blame some corporate forces that are just pissing me off right now, and people co-opting my work, without any recognition. Honestly this has happened throughout the last five years, and I have nobody to blame but myself.

I always find a way to work through the doldrums, finding my way back to the center. This particular moment the currents seem a little swifter than normal, and I cannot figure out why. I trust that I will figure this shit out, I just wanted to put out there that I'm working on it.

I cherish my readers, and thrive on shedding light on what is going on. I hope this is just me, and not a signal of what is to come. It is easier if I'm to blame. ;-) See you on the flipside.

from http://ift.tt/1LpJOx4

Tuesday, September 8, 2015

Portable API Driven Or At Least JSON Driven Interactive Visualization Tooling

As I am working on the API and JSON driven visualization strategy for my Adopta.Agency open data work, I saw the cloud monitoring platform Librato publish their new "Space" interface as a Heroku add-on. I like dashboards and visualization tooling that can live on multiple platforms, and that are engineered to be as portable and deployable as possible.

In a perfect world, infographics would be done using D3.js, and would all show their homework, with JSON or API definitions supporting any visualizations. All of my Adopta.Agency projects will eventually possess a simple, embeddable, D3.js visualization layer that can be published anywhere. Each project will have its JSON localized in the publicly available Github repository, and be explorable via any browser using Github Pages.

The Librato approach reminded me that I'd also like to see modular, containerized versions of more advanced tooling, dashboards, and visualizations around some projects. This would only apply in scenarios where a little more compute is needed behind the visualizations than can be delivered with simple D3.js + JSON hosted on Github. Essentially this gives me two grades of portable visualization deployment: light and heavy duty. I like the idea that it could be a native add-on, wherever you are deploying an open API or dataset.

I still have a lot of work to do when it comes to the light duty blueprint of JSON + D3.js, and API + D3.js, to support Adopta.Agency. I will focus on this, but keep in mind doing modular cloud deployments using Docker and Heroku for the datasets that require more heavy data lifting.

from http://ift.tt/1UFBYUn

Saturday, September 5, 2015

Pushing Forward Algorithmic Transparency When It Comes To The Concept Of Surge Pricing

I've been fascinated by the idea of surge pricing, since Uber introduced the concept to me. I'm not interested in it because of what it will do for my business, I'm interested because of what it will do for / to business. I'm also concerned about what this will do to the layers of our society who can't afford, and aren't able to keep up with, this algorithmic meritocracy we are assembling.

While listening to my podcasts the other day, I learned that Gogo Inflight wifi also uses surge pricing, which is why some flights are more expensive than others. I had long suspected they used some sort of algorithm for figuring out their pricing, because on some flights I'm spending $10.00, and on others I'm paying $30.00. Obviously they are in search of the sweet spot, to make the most money off business travelers looking to get their fix.

Algorithmic transparency is something I'm very interested in, and something I feel APIs have a huge role to play in, helping us make sense of just exactly how companies structure their pricing. This is right up my alley, and something I will add to my monitoring, searching for stories that mention surge pricing, and startups who wield it publicly as part of their strategy, as well as those who work to keep it a secret.

This is where my research starts going beyond just APIs, but it is also an area I hope to influence with some API way of thinking. We'll see where it all goes, hopefully by tuning in early, I can help steer some thinking when it comes to businesses approaching surge pricing (or not). 

from http://ift.tt/1JI4ZK1

Being a Data Janitor and Cleaning Up Data Portability Vomit

As I work through the XML, tab & comma separated, and spreadsheet strewn landscape of federal government data as part of my Adopta.Agency work, I'm constantly reminded of how the data being published is often retribution, more than it is anything of actual use. Most of what I find, despite much of it being part of some sort of "open data" or "data portability" mandate is not actually meant to be usable by its publishers.

In between the cracks of my government open data work, I'm also dealing with the portability of my own digital legacy, and working to migrate exported Evernote notes into my system, as well as legacy Tweets from my Twitter archive download. While the communicated intent of these exports from Evernote and Twitter may be about data portability, like the government data, they really do not give a shit about you actually doing anything with the data.

The requests for software as a service providers, and government agencies, to produce open data versions of our own user or public data, have upset the gatekeepers, resulting in what I see as passive aggressive data portability vomit--here you go, put that to use!! A president mandating that we database administrators open up our resources, and give up our power? Ha! The users who helped us grow into a successful startup actually want a copy of their data? Ha! Fuck you! Take that!

This is why there is so much data janitorial work today, because many of us are playing the role of janitor in the elementary school that is information technology (IT), and constantly coming across the data portability vomit that the gatekeepers of legacy IT power structures (1st and 2nd graders), and the 2.0 silicon valley version (3rd and 4th graders), produce. You see, they don't actually want us to be successful, and this is one of the ways they protect the power they perceive they possess.

from http://ift.tt/1KxseMw

Sunday, August 16, 2015

Legacy Power and Control Contained Within The Acronym

As I wade through government, higher education, and scientific research, exposing valuable data, and APIs, the single biggest area of friction I encounter is the acronym. Ironically this paradigm is also reflected in the mission of API Evangelist--helping normal people understand what the hell an Application Programming Interface is. I live in a sort of tech purgatory, and I am well aware of it.

The number one reason acronyms are used, I think, is purely because we are lazy. Secondarily though, I think there is also a lot of legacy power and control represented in every acronym. These little abbreviated nuggets can be the difference between you being in the club, or not. You either understand the technology at play, or you don't. You are in the right government circles, or not. You are trained in a specific field, or you are not. I don't think people consider what they wield when they use acronyms--there are a lot of baked in, subconscious things going on.

One of the most important aspects of the API journey, in my opinion, is that you begin to unwind a lot of the code (pun intended) that has been laid down over the years of IT operation, government policy, and research cycles. When you begin to unwind this, and make resources available via intuitive URL endpoints, you increase the chances a piece of data, content, or other digital resource will get put to use--something not all parties are actually interested in. Historically IT, government, and researchers wield their power and control by locking up valuable resources, playing gatekeeper of who is in, and who is out--APIs have the potential to unwind this legacy debt.

APIs do not decode these legacy corporate, government, and institutional pools of power and control by default. You can just as easily pay it forward with an API gateway, or via an API architect who sees no value in getting to know the resources they are putting to work, let alone its consumer(s). However if done with the right approach, APIs can provide a rich toolbox that can assist any company, institution, or government agency in decoding the legacy each has built up.

You can see this play out in the recent EPA, er I mean Environmental Protection Agency, work I did. Who would ever know that the EPA CERCLIS API was actually the Comprehensive Environmental Response, Compensation, and Liability Information System API? You don't, unless you are in the club, or you do the heavy lifting (clicking) to discover the fine print. I am not saying the person who named the Resource Conservation and Recovery Act Information API the RCRAInfo service was malicious in what they were doing--this type of unconscious behavior occurs all the time.

Ultimately I do not think there is a solution for this. Acronyms do provide us with a lot of benefit when it comes to making language, and communication, more efficient. However I think, just like we are seeing play out with algorithms, we need to be more mindful of the legacy we are paying forward when we use acronyms, and make sure we are as transparent as possible by providing dictionaries, glossaries, and other tooling.

At the very least, before you use an acronym, make sure your audience will not have to work extra hard to get up to speed, and do the heavy lifting required to reach as wide an audience as you possibly can. It is the API way. ;-)

from http://ift.tt/1fj97aj

Saturday, August 15, 2015

Asking For Help When I Needed To Better Understand The Accounting For The US Federal Budget

As I was working my way through the data for the US federal budget, I noticed a special row in between the years 1976 and 1977. It simply had the entry TQ, and no other information available about what it was. 

To get an answer regarding what this entry was, I went to my Twitter followers:

Then, because I have the most amazing Twitter followers ever, I got this response from Stephen H. Holden (@SteveHHolden):

When doing any open data work, you can't be afraid to just ask for help when you hit a wall. I've been doing data work for 25 years, and I constantly hit walls when it comes to formatting, metadata, and the data itself.

The moral of this story is: use your Twitter followers, use your Facebook and LinkedIn followers, and make sure to publish questions as a Github issue--then always tell the story!

from http://ift.tt/1TGSWXl

Friday, August 14, 2015

Stepping Up My Open Data Work With Adopta.Agency, Thanks To Knight Foundation, @3Scale, and @APISpark

I always have numerous side projects cooking. Occasionally I will submit these projects for potential grant funding. One of my projects, which I called Federal Agency Dataset Adoption, was awarded a prototype grant from the Knight Foundation. It was the perfect time to get funding for my open data work, because it coincided with the Summer of APIs work I'm doing with Restlet, and work already in progress defining government open data and APIs with 3Scale.

After reviewing my Federal Agency Dataset Adoption work, I purchased a domain, and quickly got to work on my first two prototype projects. I'm calling the prototype Adopta.Agency, and kicking it off with two projects that reflect my passion for the project.

US Federal Budget
This is a project to make the US federal budget more machine readable, in hopes of building more meaningful tools on top of it. You can already access the historical budget via spreadsheets, but this project works to make sure everything is available as CSV, JSON, as well as an active API.

VA Data Portal
This project is looking to move forward the conversation around VA data, making it more accessible as CSV and JSON files, and deploying simple APIs when I have the time. The VA needs help to make sure all of its vital assets are machine readable by default.

The first month of the project will be focused on defining the Adopta Blueprint for the project, by tackling projects that my partner in crime Audrey Watters (@audreywatters) and I feel are important, and that set the right tone for the movement. Once the blueprint is stable, we will be inviting other people into the mix, and tackling some new projects.

Adopta.Agency is not a new technology, or a new platform, it is an open blueprint that employs existing services like Github, and tools like a CSV to JSON converter, to help move the government open data movement forward just one or two steps. The government is working hard, as we speak, to open up data, but these agencies don't always have the skills and resources to make sure these valuable public assets are ready for use in other websites, applications, analysis, and visualizations--this is where we come in!

With Adopta.Agency, we are looking to define a Github enabled, open data and API fueled, human driven network that helps us truly realize the potential of open data and APIs in government -- please join in today.

from http://ift.tt/1DTylYx

Being The Change We Want To See In Open Government Data With Adopta.Agency

I have had a passion for open data for a number of years. Each time the federal budget has come out in the last 10 years, I would parse the PDFs, and generate XML, and more recently JSON, to help me better understand how our government works. I've worked hard to support open data and APIs in the federal government since 2012, resulting in me heading to Washington DC to work on open data projects at the Department of Veterans Affairs (VA) as a Presidential Innovation Fellow (PIF).

I understand how hard it is to do open data and APIs in government, and I am a big supporter of those in government who are working to open up anything. I also feel there is so much work left to be done to augment these efforts. While there are thousands of datasets now available via Data.gov, and in the handful of data.json files published by federal agencies, much of this data leaves a lot to be desired, when it comes to actually putting it to use.

As people who work with data know, it takes a lot of work to clean up and normalize everything--there is just no way around this, and much of the government data that has been opened up still needs this janitorial work, as well as conversion into a common data format like JSON. When looking through government open data you are faced with spreadsheets, text files, PDFs, and any number of other obscure formats, which, while they may meet the minimum requirements for open data, need a lot of work to get them truly ready for use in a website, visualization, or mobile application.

Adopta.Agency is meant to be an open blueprint, to help target valuable government open data, clean it up, and at a minimum, convert it to be available as JSON files. When possible, projects will also launch open APIs, but the minimum viable movement forward should be about cleaning and conversion to JSON. Each project begins with forking the Adopta Blueprint, which walks users through the targeting, cleaning, and publishing of data to make it more accessible, and usable by others.
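The cleaning and conversion step at the heart of the blueprint is deliberately simple. Here is a rough sketch of what it looks like--the file names and columns are made up for illustration, and real projects will need dataset-specific cleanup beyond stripping whitespace:

```python
import csv
import json

def csv_to_json(csv_path, json_path):
    """Read a government CSV export, strip stray whitespace from
    headers and values, and write a machine readable JSON file."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = [
            {key.strip(): (value or "").strip() for key, value in row.items()}
            for row in reader
        ]
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
    return rows
```

Committing both the cleaned CSV and the resulting JSON to the project's Github repository keeps the data, and the janitorial work itself, out in the open for anyone to verify or improve upon.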

Adopta.Agency employs Github repositories for managing the process, storage, and sharing of data files, while also acting as a gateway for accessing the APIs, and engaging in a conversation around how to improve upon the data and APIs available as part of each project (which is what APIs are all about). Adopta is not a specific technology, it is a blueprint for using commonly available tools and services to move government open data forward one or two steps.

We feel strongly that making sure government open data is available in a machine readable format can be a catalyst for change. Ironically, even though this data and these APIs are meant for other computers and applications, we need humans to step up, and be stewards of an ongoing portion of the journey. Government agencies do not have the skills, resources, and awareness to do it all, and when you actually think about the big picture, you realize it will take a team effort to make this happen.

Adopta.Agency is looking to define a Github enabled, open data and API fueled, but ultimately human driven network to help everyone realize the potential of open data and APIs in government -- please join us today.

from http://ift.tt/1KmmeAe

Thursday, August 13, 2015

Forget Uber, If You Build A Platform That Feeds People Like This, Then I Will Care

I was listening to the To Cut Food Waste, Spain's Solidarity Fridge Supplies Endless Leftovers segment on NPR today, which made me happy, but then quickly left me sad regarding 99% of the tech solutions I see being developed today. The tech sector loves to showcase how smart we all are, but in the grand scheme of things, we are mostly providing solutions to non-problems, when there is a world full of real problems that need solving.

I remember being at MIT for a hackathon a couple years back, where, when we were done with the catered food for our event, the food was taken down to a corner of a hallway that had a table, and a webcam. After putting the bagels, pizza, juice, and other items on the table, within about 20 minutes it was gone--students fed, and food not wasted. #winning

The solidarity fridge idea reminded me of this experience, and it makes me sad that there is not an Uber for fucking feeding people! Why the hell isn't there a solidarity fridge and pantry on every street corner in the world? Why don't we have an Uber for this idea? Why aren't there food trucks doing this? Oh, because there is no fortune to be made in actually making sure people are being fed, and Silicon Valley really doesn't give a shit about solving real problems--it is just what we tell ourselves so we can sleep at night.

If you are building a platform that helps neighborhoods manage their solidarity fridge and pantries, complete with API, mobile and web apps, and SMS push notifications, then you will see me get real excited about what you are doing--until then...

from http://ift.tt/1Ne4vjO

Wednesday, July 22, 2015

Micro Attempts At Being The Change I Want To See in Government

One by-product of being as OCD as I am is that I am always looking for the smallest possible way that I can help grease the wheels of the API economy. A big part of helping the average person understand any company or API is possessing a simple image to represent the concept, either a screenshot, logo, or other visualization. A picture is worth a thousand words, and as essential to API operations as your actual service.

As I worked to understand the agencies that power our federal government, I quickly realized I needed a logo for each of the 246 federal agencies--something that didn't exist. I could find many on Wikipedia, and Google for the others, but there was no single source of logos for federal agencies--even at the Federal Government Directory API from USA.gov. Unacceptable. So I created my own, and published them to Github.

Ultimately, I am not happy with all of the logos I found, and think the collection can be greatly improved upon, but it provides me with a base title, description, and image for each of our federal agencies. It is something you can find in the Github repository for my federal government API research, with a JSON representation of all federal agencies + logos under the data folder for the research.

It took me about 6 hours to do this work, and it is something I know has been used by others, including within the federal government, as well as across much of my own API research and storytelling. These are the little actions I enjoy inflicting: helping to wield APIs, and machine readable, meaningful, openly available micro data-sets that can be used in as many scenarios as possible. Logos might seem irrelevant in the larger open data war, but when it comes to the smaller skirmishes, a logo is an important tool in your toolbox.

from http://ift.tt/1SD5r0a

Friday, July 3, 2015

Use Of APIs By Regulators To Audit Telco Behavior

I keep reading stories about federal regulators investigating and issuing fines to telcos, like AT&T paying $105 million for unauthorized charges on customer bills, and Verizon and Sprint paying $158 million for cramming charges on customers' bills. Maybe I am biased (I am), but I can't help but think about the potential for APIs and OAuth to help in this situation.

As an AT&T and Verizon customer, I can say that I could use help auditing my accounts. I'm sure other users would pay for a service that would help monitor their accounts, looking for irregularities. I think about services like Cloudability, which help me manage costs in my cloud computing environment--why aren't there more of these things in the consumer realm?

If all services that are available online simply had APIs for their accounts, this would be possible. It would also open up the door for government agencies, and public sector organizations to step up and provide research, auditing, and potentially data counseling for the average citizen and consumer. 
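As a rough sketch of the kind of auditing such account APIs would enable, here is a hypothetical cramming detector in Python; the billing data structure is entirely an assumption, since no telco actually exposes accounts this way today:

```python
from collections import defaultdict

def flag_irregular_charges(bills, tolerance=0.25):
    """Flag line items in the latest bill that never appeared before,
    or that jumped more than `tolerance` above their historical average.

    `bills` is a list of monthly bills, each a dict mapping a charge
    description to an amount; the last entry is the month under audit.
    """
    history, current = bills[:-1], bills[-1]
    totals, counts = defaultdict(float), defaultdict(int)
    for bill in history:
        for item, amount in bill.items():
            totals[item] += amount
            counts[item] += 1

    flags = []
    for item, amount in current.items():
        if counts[item] == 0:
            flags.append((item, "new charge"))  # classic cramming signature
        else:
            avg = totals[item] / counts[item]
            if amount > avg * (1 + tolerance):
                flags.append((item, f"{amount / avg:.0%} of usual"))
    return flags

bills = [
    {"Voice plan": 60.0, "Data": 30.0},
    {"Voice plan": 60.0, "Data": 32.0},
    {"Voice plan": 60.0, "Data": 31.0, "Premium SMS": 9.99},  # crammed charge
]
print(flag_irregular_charges(bills))
```

A consumer service, or a regulator, could run exactly this kind of check continuously, instead of discovering years of cramming after the fact.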

I want more access to the data I generate via the telecommunication companies. I also want to be able to take advantage of services that help me manage my relationships with these companies. I also think a certain amount of regulatory access and control should be introduced into all of this, and APIs provide not just a programmatic way to do it, but a real-time way, which might provide the balance we need--rather than the feds only having the information and enforcement power they need to take action every few years.

from http://ift.tt/1H5wvCf

Friday, June 19, 2015

A Better Understanding When It Comes To Licensing Of Data Served Up Through APIs

Through my work on API Evangelist, and heavy reliance on Github, I have a pretty good handle on the licensing of code involved with APIs--I recommend following Github's advice. Derived from my work on the Oracle v Google copyright case, and the creation of API Commons, I also have a solid handle on licensing of API interfaces. One area where I am currently deficient, and something that has long been on my todo list, is establishing a clear stance on how to license data served up via APIs.

My goal is to eventually craft a static page that helps API providers and consumers better understand licensing for the entire stack, from the database, to the server, to the API definition, all the way to the client. I rely on the Open Data Commons for three licensing options for open data:

  • Public Domain Dedication and License (PDDL) - dedicates the data and database to the public domain
  • Attribution License (ODC-By) - allows any use of the data, as long as the source is attributed
  • Open Database License (ODbL) - requires attribution, and that adapted databases be shared under the same license

I am adding these three licensing options to my politics of APIs research, and will work to publish a single research project that provides guidance not just on licensing of data served up through APIs, but also on code, definitions, schemas, and more.

The guidance from Open Data Commons is meant for data owners who are looking to license their data before making it available via an API. If you are working with an existing dataset, make sure to consult the data source on licensing restrictions, and carry these forward as you do any additional work.

from http://ift.tt/1L7yrwO

Monday, May 11, 2015

On Encountering Skeptical Views Around Open Data

I spend a lot of time talking about open data with businesses and governments of all shapes and sizes. This topic was front and center at APIDays Berlin / APIStrat Europe, and APIDays Mediterranea. Open data was part of numerous talks, but more importantly it dominated conversations in the hallways, and late into the night at the drinking establishments where we gathered.

In my experience there are four camps of people when it comes to open data:

  1. Those who know nothing about open data
  2. Those who don't know much, but have lots of opinions
  3. Those who have experience, and over promise the results
  4. Those who have experience, and get hands dirty

I'd say the people I met in my latest travels were overwhelmingly in the first or fourth bucket. However, I did meet a handful of folks I put in the second bucket, who were very dismissive of the potential of open data. In my experience these people have either listened to the rhetoric of people in bucket three, or just don't have the experience that many of the rest of us have.

I agree that the state of open data coming out of city, state, and federal government programs often lacks much of what we'd like to see in a healthy, mature program. What I feel skeptics miss is hands-on experience making this happen in government (this shit is hard), and a willingness to help take things to the next level. This takes an effort from all of us, not just the people in government--there is a lot you can do from the outside to help make things better (not just criticize).

It feels like we are getting past a lot of the damage created by early open data rhetoric, which I felt over-promised and under-delivered--something we have to learn from in future storytelling. I don't feel all open data skeptics and critics are required to get their hands dirty, but I guarantee if you work on a couple of hands-on projects, your views will change.

from http://ift.tt/1dYmyMY

Tuesday, May 5, 2015

Shhhhh, Be Very Very Quiet, I Am Hunting Mansplainers

For the most part I ignore the bullshit that flows into my girlfriend @audreywatters' Twitter timeline (yes, I am watching). We both tend to write some pretty critical things about technology, but for some reason (hmmm, what could it be), her timeline is full of some pretty vocal "dudes" looking to set her straight. I just do not have the energy to challenge every sexist male looking to tell her she is wrong, but every once in a while I just need to vent a little--so I go hunting mansplainers in her Twitter timeline.

One young white fellow wins the prize this week. He got my attention, resulting in a conversation that ended in this response:

Yeah, the days she spent writing that, when we discussed all the details, gave me no insight into the logic, let alone the last five years of discussing this topic with her. During my mansplainer hunting, I'm not out to convince these dudes of how out of line they are; honestly I'm just looking to fuck with them, and let them know I'm here. I do not know the answer to helping us sexist men learn the error of our ways. Yes, even I have sexist tendencies--the only difference is that I am well on my way to learning. You see, I am white and male, and even though I grew up very poor, raised by a single mother, I have still enjoyed a very privileged existence for most of my life.

I could easily cherry-pick specific Tweets from this dude, showing his flip-flopping nature, where he blames Audrey for specific things he can't actually cite in her post, and talks of her blaming these other men he's defending for doing what he claims are sinister things--wait, no, sinister was his reference in a Twitter conversation with someone else. No wait, the last paragraph in her post alludes to this. I just need to be able to follow the Twitter thread to understand his point. Why am I so dense?

Look, I don't give a shit, buddy. I'm just fucking with you because you are spouting stupid shit in her timeline. I really don't give a fuck where you are coming from. If you knew the number of dudes I've seen tell her how wrong she is, or that she needs to shut the fuck up, or who hacked my websites and told me to keep her in line, you'd go away pretty quickly (you are in good company). You need to tune into the bigger conversation, and not feel the need to tell women they are wrong. The reason you feel this way is that you don't see her as an expert, because she is a woman. Period.

What people like you should do is write a response on your own blog, in your own domain, and reply simply with "here are my thoughts". Then you can lay out all the detail you need, cite your own sources, and hopefully do as much work as she did when crafting her story. Then if she cares (she won't), she can reply on her blog, and return the favor. I know what you are going to say: oh, I can't even open my mouth without mansplaining? Probably not. You are clueless of the bigger picture, except the view from your own position.

I'm not saying everything Audrey says is right, but I am saying you need to step back and analyze your approach. One thing I've learned during my time running a business with my ex-wife, and the amazing five-plus years I've spent with Audrey, is that there is more to this than we men can ever imagine. I disagree with a lot of things I read online; most of them I do not ever respond to, and with the things I do, I critically evaluate how I respond--I just do not vomit my privileged position into people's timelines.

I know, futile effort. I can never change these types of people's behavior, but I just can't help hunting the mansplainers in her timeline, and venting, while letting them know I'm sitting by her side. If you have any other comments or questions, please read Is My Girlfriend Bothering You?


from http://ift.tt/1QiriLg

Thursday, March 12, 2015

I Have Gotten More Return On the Ideas I Have Set Free Than Any I Have Locked up

When I walk through the junkyard of startups and business ideas in my mind, I can’t help but feel that much of my current success with API Evangelist has more to do with open ideas than any other aspect. I have numerous startups under my belt, where I tried to capitalize on ideas I've had, ranging from selling wine online to real estate data, but nothing like what I'm doing now.

Do not get me wrong, I’ve had a lot of success along the way, but nothing that compares to the feeling of success I have with API Evangelist. Other than being in the right place at the right time, I cannot come up with much that separates API Evangelist from my previous work—except the fact that API Evangelist is focused on opening up and freeing every idea that comes along.

This type of approach to business might not be right for everyone. I’m sure I’ve passed up some pretty lucrative opportunities to monetize my ideas, but in the end, I mostly enjoy making enough money to get by, and generating as much positive exhaust around my ideas as I can. I'm not saying all businesses need to think like this, but the more API-centric your business is, the more I think you have to consider the repercussions of locking up ideas vs. setting them free.

from http://ift.tt/1wBIW70

Monday, February 23, 2015

Making Sense At The 100K Level: Twitter, Github, And Google Groups

I try to make sense of which companies are doing interesting things in the API space, and of the interesting technologies coming out of these companies, which sometimes take on a life of their own. The thing I wrestle with constantly is: how do you actually do this? The best tools in my toolbox currently are Twitter and Github. These two platforms provide me with a wealth of information about what is going on within a company or specific project, the surrounding community, and the relationships they have developed (or not) along the way.

Recently I’ve been spending time diving deeper into the Swagger community, and two key sources of information are the @swaggerapi Twitter account, and the Swagger Github account, with its 25+ repositories. Using each of these platforms' APIs, I can pull followers, favorites, and general activity for the Swagger community. Then I come up against the SwaggerSocket Google Group. While there is a rich amount of information and activity in the forum, with no RSS feed or API, I can’t make sense of the conversation at a macro level, alongside the other signals I’m tracking—grrrrrr.

At any time I can tune into the activity on Twitter and Github for the Swagger community, but the Google Group takes much more work: I have to go to the website to view it, and manually engage. Ideally I could see Twitter, Github, and Google Group activity side by side, and make sense of the bigger picture. I can get email updates from the forum, but these only apply from now forward, and give me no context on the history of the conversation within the group—without visiting the website.
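To illustrate the "side by side" view I am after, here is a minimal Python sketch that merges already-fetched events from each channel into one chronological stream; the event records below are made up, and in practice the Twitter and Github entries would come from their respective APIs, while the Google Group content would have to be collected by hand or scraped:

```python
from datetime import date

# Hypothetical, already-fetched events from each channel.
events = [
    {"source": "twitter", "date": date(2015, 2, 20), "text": "@swaggerapi release announced"},
    {"source": "github",  "date": date(2015, 2, 21), "text": "swagger-core: new issue opened"},
    {"source": "forum",   "date": date(2015, 2, 19), "text": "SwaggerSocket: thread collected manually"},
]

def unified_timeline(events):
    """Merge per-channel events into one chronological stream."""
    return sorted(events, key=lambda e: e["date"])

for e in unified_timeline(events):
    print(e["date"], e["source"], e["text"])
```

The merge itself is trivial; the point is that it only works when every channel can be pulled programmatically, which is exactly what the Google Group lacks.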

Just a side rant from the day. This is not a critique of the Swagger community, just an outside view on the usage of Google Groups as an API community management tool. I use the platform for APIs.json and API Commons, but I think I might work on a better way to manage the community, one that allows outside entities to better track the conversation.

from http://ift.tt/1Eqi2jU

Sunday, February 8, 2015

Emails From People Saying Nice Things And Not Wanting Anything From Me

I process my thoughts through stories on my blogs, and oftentimes you'll find me bitching about people and companies here on kinlane.com. Other times you'll find me waxing poetic about how nice people can be—welcome to my bipolar blogging world.

In this post, I want to say how much I like finding nice emails from people in my inbox, especially when they don’t want anything from me. Getting these nice notes from people, about specific stories, topics, or just generally thanking me for what I do, makes it all worth it.

Ok, I'll stop gushing, but I just wanted to say thank you—you know who you are.

from http://ift.tt/1A91Cee

Friday, February 6, 2015

An Archive.org For Email Newsletters Using Context.io

I’m not going to beat around the bush on this idea; it just needs to get done, and I just don’t have the time. We need an archive.org for email newsletters, and other POP-related elements of the digital world we have created for ourselves. Whether we love or hate the inbox layer of our lives, it plays a significant role in crafting our daily reality. Bottom line: we don’t always keep the history that happens, and we should be recording it all, so that we can pause and re-evaluate at any point in the future.

I cannot keep up with the amount of newsletters flowing into my inbox, but I do need to be able to access this layer as I have the bandwidth available to process it. Using Context.io, I need you to create an index of the popular email newsletters that are emerging. I feel like we are seeing a renaissance in email, in the form of the business newsletter--something I don't always have the time to participate in.

During the course of my daily monitoring, I received an email from Congress.gov about a new legislative email newsletter, something I’d be interested in, but immediately I’m questioning my ability to process the new information:

  • A specific bill in the current Congress - Receive an email when there are updates to a specific bill (new cosponsors, committee action, vote taken, etc.); emails are sent once a day if there has been a change in a particular bill’s status since the previous day.
  • A specific member’s legislative activity - Receive an email when a specific member introduces or cosponsors a bill; emails are sent once a day if a member has introduced or cosponsored a bill since the previous day.
  • Congressional Record - Receive an email as soon as a new issue of the Congressional Record is available on Congress.gov.

This is all information I’m willing to digest, but I ultimately have to weigh it alongside the rest of my information diet—a process that isn’t always equitable. If I could acknowledge an email newsletter as something I’m interested in, but only when I have time, I would be open to adopting a new service.

We need to record this layer of our history, something our inboxes just aren’t doing well enough. I think we need a steward to step up and be the curator of this important content being sent to our inboxes, which doesn’t always exist on the open Internet. Technically, I do not think it would be too difficult to do using Context.io; I just think someone needs to spend a lot of time signing up for newsletters, and being creative in crafting the interface and index people can engage with in meaningful ways—something people will actually find useful and pay for.
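Setting Context.io aside, here is a minimal Python sketch, using only the standard library, of the kind of indexing such an archive would do on each incoming newsletter; the message itself is a made-up example, not a real mailing:

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# A fabricated newsletter issue, standing in for a real inbound message.
raw = """\
From: Weekly API Digest <digest@example.com>
To: archive@example.com
Subject: Issue #42: This Week in APIs
Date: Tue, 03 Feb 2015 09:00:00 -0800

Hello! Here is what happened this week...
"""

def index_newsletter(raw_message):
    """Extract the fields an archive would store for each newsletter issue."""
    msg = message_from_string(raw_message)
    return {
        "sender": msg["From"],
        "subject": msg["Subject"],
        "date": parsedate_to_datetime(msg["Date"]).isoformat(),
        "body": msg.get_payload().strip(),
    }

entry = index_newsletter(raw)
print(entry["subject"])
```

Point a parser like this at a dedicated archive inbox, store the extracted records, and you have the beginnings of the searchable history the post is asking for.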

from http://ift.tt/1KoYwWh

Tuesday, February 3, 2015

A Machine Readable Version of The President's Fiscal Year 2016 Budget On Github

The release of the president's fiscal year 2016 budget in a machine readable format on Github was one of the most important things to come out of Washington D.C. in a while when it comes to open data and APIs. I was optimistic when the president mandated that all federal agencies go machine readable by default, but the release of the annual budget in this way is an important sign that the White House is following its own open data rhetoric, and something every agency should emulate.

There is still a lot of work to be done to make sense of the federal budget, but having it published in a machine readable format on Github saves a lot of time and energy in this process. As soon as I landed on the Github repository, clicked into the data folder, and saw the three CSV files, I got to work converting them to JSON format. Having the budget available in CSV is a huge step beyond the historic PDFs we’ve had to process in the past to get at the budget numbers, but having it in JSON by default would be even better.

What now? Well, I would like to make more sense of the budget, and to be able to slice and dice it in different ways, I’m going to need an API. Using a Swagger definition, I generated a simple server framework using Slim and PHP, with an endpoint for each file: budauth, outlays, and receipts. Now I just need to add some searching, filtering, paging, and other essential functionality, and it will be ready for public consumption--then I can get to work slicing and dicing this budget, and previous years' budgets, in different ways.
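The CSV-to-JSON conversion step itself is simple enough to sketch in a few lines of Python; the column names and figures below are placeholders for illustration, not the actual headers or numbers from the White House files:

```python
import csv
import io
import json

# A small stand-in for one of the budget CSVs (budauth, outlays, receipts);
# the real files have many more columns and thousands of rows.
csv_text = """\
agency_name,bureau_name,account_name,2016
Department of Defense,Operations,Maintenance,1000
Department of Education,Student Aid,Pell Grants,300
"""

def csv_to_json(text):
    """Convert a budget CSV into a JSON array of row objects,
    using the header row as the keys for each record."""
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows, indent=2)

print(csv_to_json(csv_text))
```

Once each file is an array of keyed records like this, layering searching, filtering, and paging on top via an API becomes straightforward.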

I already have my eye on a couple of D3.js visualizations to help me make sense of the budget. First, I want to be able to show the scope of the budget for different areas of government, to help make the argument against bloat in areas like the military. Second, I want to provide some sort of interactive tool that will help me express my priorities when it comes to the federal budget--something I've done in the past.

It makes me very happy to see the federal government budget expressed in a machine readable way on Github. Every city, county, state, and federal government agency should be publishing their budgets this way. PDF is no longer acceptable; in 2015, the minimum bar for a government budget is a CSV on Github—let’s all get to work!

from http://ift.tt/1DCnz46

Saturday, January 31, 2015

My Smart Little (Big) Brother And Programmatically Making Sense Of PDFs

I was in Portland, Oregon a couple of weeks ago, and one of the things I do when I visit PDX is drink beer with my little (big) brother Michael (@m_thelander). He is a programmer in Portland, working diligently away at Rentrak. Unlike myself, Michael is a classically trained programmer, and someone you want as your employee. ;-) He’s a rock solid guy.

Anyhoo. Michael and I were drinking beer in downtown Portland, talking about a project he had worked on during an internal hackathon at Rentrak. I won’t give away the details, as I didn’t ask him if I could write this. :-) The project involved the programmatic analysis of thousands of PDFs, so I asked him what tools he was using to work with PDFs.

He said they were stumbling over the differences in formatting between PDFs, and couldn’t get consistent results, so they decided to just save each page as an image, and use the tesseract open source OCR engine to read each image. Doing this essentially flattened the differences between PDF types, while also giving him the additional details tesseract provides.
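Here is a rough Python sketch of that approach, wrapping the tesseract command line; it assumes the PDF pages have already been saved as images, and it only runs OCR if the engine is actually installed, so the wiring is illustrative rather than the exact pipeline Michael built:

```python
import shutil
import subprocess

def tesseract_command(image_path, output_base, lang="eng"):
    """Build the CLI invocation for the tesseract OCR engine;
    tesseract writes its text output to `<output_base>.txt`."""
    return ["tesseract", image_path, output_base, "-l", lang]

def ocr_page(image_path, output_base):
    """Run OCR on one page image, if tesseract is installed locally."""
    if shutil.which("tesseract") is None:
        return None  # engine not installed; skip gracefully
    subprocess.run(tesseract_command(image_path, output_base), check=True)
    with open(output_base + ".txt") as fh:
        return fh.read()

print(tesseract_command("page-001.png", "page-001"))
```

Loop `ocr_page` over every rendered page and you get a uniform text corpus, regardless of how each original PDF was produced.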

It may not seem like much, but ultimately it is a very interesting approach, and as I continue doing big data projects around things like patents, I’m always faced with the question: what do I do with a PDF? I will have to steal (borrow) from my smart little brother's work and build a tesseract API prototype.

from http://ift.tt/1yYL8Es

Wednesday, January 28, 2015

Why Are You So Hard To Get A Hold Of?

This is another post in my ongoing series of regular responses I give to people. Meaning, when I get asked something often enough, I craft a blog post that lives on kinlane.com, and I reply to emails, tweets, etc. with a quick link to my standardized response.

One I get less frequently, but still often enough to warrant a response, is "why are you so hard to get a hold of?"

To which the answer is, "I’m not". I have a phone number that is very public, I have 3 emails all going into the same inbox, and a highly active Twitter, LinkedIn, Facebook, and Github presence. If you are having trouble getting a hold of me, it is because you are not using the right channels, or potentially the right frequency.

First, I don’t talk on the phone. I schedule meetings, increasingly only on Thursdays (regularly for partners, etc.), where I talk on Skype, Google Hangouts, and occasionally the phone. When I talk on these channels, I can do nothing else. I can’t multi-task. I am present. If I did this all the time, I wouldn’t be the API Evangelist—I’d be that phone talker guy.

Second, I respond well to quick, concise emails, tweets, wall posts, and Github issues. The shorter and more concise, the better. This is what I mean by frequency: if you send me a long-winded email, there is a good chance it could be weeks before I respond, or I may never respond at all. Sorry, I just don’t have the bandwidth for that frequency—I use short, precise signals.

I do not have a problem with someone being a “phone person”, but I’m not one, sorry. In my experience, people who require lots of phone calls also require lots of meetings, and often shift in their needs, because nothing is anchored to a specific outline, document, or project requirements. Personally I try to avoid these types of personalities, because they have proven to be some of the least efficient, and most demanding, relationships in my professional life.

Please don't take this message the wrong way, I'm trying to help you be as successful as you can in making the right connection.

from http://ift.tt/1tvcNME

There Is A Good Chance That I Will Be Remembered For What You Did, Because I Told The Story

My friend Matthew Reinbold (@libel_vox) wrote a great piece on his blog titled, Storytelling and The Developer’s Need To Communicate, reflecting on an un-conference session I did last July at API-Craft in Detroit. Thanks for the great thoughts on storytelling, Matt; it is something that is super infectious, and it reminded me of a related story, which I hope continues to emphasize the importance of storytelling in the API space.

Another of the friends I thoroughly enjoy swapping stories with at API conferences, and in the dark corners of bars around the world, is Mike Amundsen (@mamund). Now I may have the name wrong, but one time Mike told me a story about how John von Neumann (correct me if I’m wrong, Mike) is known for a lot of ideas that he didn’t necessarily come up with on his own. He was just such a prolific thinker and storyteller, which allowed him to process other people’s ideas, then publish a paper on the subject before anyone else could. Some people would see this as stealing ideas, but one can also argue that he was just better at storytelling.

While I have developed many of my own ideas over the years, much of what I write about is extracted from what others are up to across the API space. I have made an entire career out of paying attention to what technologists are doing, and telling a (hopefully) compelling story about what I see happening, and how it fits into the bigger API picture. As a result, people often associate certain stories, topics, or concepts with me, when in reality I am just the messenger—something that will also play out in the larger history told in coming years.

I’m not that old, but I’m old enough to understand how the layers of history lay down, and I have spent a lot of time considering how to craft stories that don’t just get read, but get retold, and have a much better chance of being included in the larger history. As Matthew Reinbold points out, all developers should consider the importance of storytelling in what they do. You don’t have to be a master storyteller or super successful blogger, but your ideas will be much better formed if storytelling is part of your regular routine, and the chance you will be remembered for what you did increases with each story you tell.

from http://ift.tt/1CzDYaY

Tuesday, January 27, 2015

Cybersecurity, Bad Behavior, and The US Leading By Example

As I listened to the State of the Union speech the other day, and stewed on the topic for a few days, I can’t help but see the future of our nation's cybersecurity policy through the same lens as I view our historic foreign policy. In my opinion, we’ve spent many years behaving very badly around the world, resulting in very many people who do not like us.

Through our CIA, military, and general foreign policy, we’ve generated much of the hatred towards the west that has resulted in terrorism even being a thing. Sure, it would still exist even if we hadn't, but we’ve definitely fanned the flames until it has become the full-fledged, never-ending, profitable war it is today. This same narrative will play out in the cybersecurity story.

For the foreseeable future, we will be inundated with stories of how badly behaved Russia, China, and other world actors are on the Internet, but it will be through our own bad behavior that we fan the flames of cyberwarfare around the world. Ultimately I will be reading every story of cybersecurity in the future while also looking in the collective US mirror.

from http://ift.tt/1uZSQyK