Thursday, December 10, 2015

Machine Learning Will Lead When It Comes To Algorithmic Rent-Seeking

I've long been searching for a term to describe a concept that I see across the API space, where API providers, API consumers, and API service providers take more from the space than they give back. This concept shows up across the API space in many forms, preying upon the open nature of the API sector, and looking to extract value and generate revenue on the backs of the hard work of others.

After listening to the audiobook version of The Price of Inequality by Joseph E. Stiglitz on a recent drive, I re-learned the term rent-seeking--a phrase I had heard before, but whose potentially wide meaning I had not fully grasped when applied to financial products, natural resources, and other types of common resources. Google gives me the following definition:

Rent-Seeking: When a company, organization, or individual uses their resources to obtain an economic gain from others without reciprocating any benefits back to society through wealth creation.

Rent-seeking is common practice within the API layer of the Internet. When you consider the concept, and apply it to the unlimited number of digital resources that are being exposed via APIs, you begin to see unlimited possibilities for rent-seeking when transparency is not present. Now that I have the seed planted in my head, and a phrase to apply, I will be exploring this concept more, but I couldn't help but think about one of the biggest offenders I'm seeing unfold across the space--machine learning.

Don't get me wrong. There will be a lot of machine learning solutions that will help move our digital world forward, but there are also lots of smoke-and-mirrors machine learning and big data solutions, which will purely be seeking resources to mine. As an API provider, it can be easy to focus on the individual API consumers who are bad actors, when in reality you should be thinking much bigger, about the potential partners or service providers who can consume large amounts, or even all, of your resources.

This dark side of machine learning that I am focusing on will include the artificial intelligence, machine learning, big data, and analytics providers who will be selling you magical solutions and lofty promises, which will require the ingestion of your valuable data, content, and algorithmic resources before they can return said magic to you. Some of these solutions will offer more value than they consume, but many will not. If you are the steward of a valuable corpus of data or content, or have crafted valuable algorithms, many large companies will be approaching you in coming months and years, interested in helping you generate insights from these valuable resources.

Machine learning isn't bad. I just want to help you be aware that there are providers who just want to mine your resources, looking to add value to their own data, content, and algorithms, or possibly even just pass it on, selling it directly to other providers. Machine learning will be big business, and something that will dramatically incentivize rent-seeking behavior hidden behind an algorithm. I'll be telling stories of other rent-seeking behavior that I see in the API space, but at this point I feel like machine learning will lead when it comes to algorithmic rent-seeking in the API economy.



from http://ift.tt/1QhQAMi

Friday, December 4, 2015

Not Every Band Needs To Sign With A Major Record Label Or Become An Orchestra

I used to work in the music industry back in the 90s, and sometimes the tech sector reminds me of that time hustling in the music space. There are definitely a lot of things that are different, but the business of the tech space often reminds me of some of the business currents that exist in the music industry.

When you understand the lay of the land in the music industry, there are three distinct spheres of operation:

  • Independent - Small, successful bands who build a solid audience, and are able to make a living.
  • Label (Failure) - Bands that feel the need to be the next Led Zeppelin, and sell their souls.
  • Label (Success) - The small portion of bands who actually find large scale success, and get all the attention.

Most of the work I did in the music promotion space existed within the world of independent or label (failure), with only one semi-successful band that had a brief flash of success, and I'd still actually put them in the label (failure) bucket, now that I think about it. There is a lot of money to be made as a promotion company selling services to bands who think they are going to make it big--they tend to sell everything, beg, borrow, and steal to pay you. (sound familiar?)

There are some independent bands who understand that they can actually make a decent living making music, through a mix of selling music, merchandise, and touring. Most other bands think they need to sign with a major label, when in reality the labels are just gambling and playing the numbers on the bands they sign. Only a small percentage of any record label's catalog will make them big money, the rest--marbles on the roulette table. Smart bands know they can do well by playing good music and running the business side well, where the not-so-smart bands always have to be famous, with only a small fraction ever making it--the rest burn out and fade away.

In the end, not every band needs to sign with a major record label, and not every band needs to aspire to be an orchestra. This reminds me of tech startups, in that not all startups need to sign with a VC, and definitely do not all need to scale and become the next enterprise organization.



from http://ift.tt/1lDqczg

Thursday, December 3, 2015

If I Cannot Scale It On My Own Or Use A Service Provider I Do Not Scale It

I have had numerous startup ideas, half- to fully-baked, over my 25-year career, with just two that I consider to be a success. All of them taught me massive lessons about myself, business, and building relationships with other people. All of this experience has gone into my invention of the API Evangelist persona, and there are some hard-learned lessons that dictate how I grow and scale what I do.

I do not scale anything I cannot scale by developing a new API, or putting a software-as-a-service provider to use (one which I can afford). The concept of hiring someone does come into the picture from time to time, but always quickly fades. I feel this provides me with some very healthy constraints, which push me to really think through what scale is to API Evangelist.

I have a huge laundry list of things I would like to do, but because it has to wait until I have the time and energy to do it, much of it never gets done, and that is a good thing. If it is critical, I will do it. If I feel it will help the API community at this point in time, I will do it. If I can convince someone else to do it, I will. ;-) Sometimes I just wait until someone else does it, and purchase their service.

I am not saying this is an approach all companies should follow, I'm just sharing the approach that works for me. My growth in site traffic, blog posts, industry guides, and revenue has all happened slowly and steadily over time, in sync with the amount of work I do, and how much I am able to scale my own operations. This post is just a reminder to myself not to get frustrated with my massive to-do list--the scaling of API Evangelist has always occurred slowly, and it will never come as fast as I would like.



from http://ift.tt/1TDAbQP

Tuesday, December 1, 2015

All My Code Snippets Now Live As APIs Which Makes Them Way More Discoverable

Historically, I have had a "working" Amazon EC2 micro instance that is my playground for writing code. This is where I begin all my coding adventures, and often it is where most of them end up living forever. I have a lot of great work here that I easily forget about, shortly after pushing the ideas out of my head and onto the server via my IDE.

I have never had a way to index, search, or discover the wide array of coding projects that I have produced--if a piece of code never gets a link in my administrative interface, it will often be lost forever. Sometimes I will write some code to support an idea, only to find a folder adjacent to it that does the same thing. Doh! I forgot I ever did that!

With my new approach to managing my API stack as a series of Github repositories, the ultimate goal of any coding project is to wrap it as a simple API. As I wrap up any little snippet of code, the final step is always to publish it to an existing repo, or create a new repo, and make sure it is available as a simple API endpoint.

As an API endpoint in my API stack, every piece of code becomes discoverable via the APIs.json file, which I can browse via the Github Pages portal, or programmatically via its JSON. I'm sure there are some endpoints I may never use, but at least each one is available in my toolbox as an API, and who knows, I may eventually put it to work, or evolve it as part of some future idea.
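To give a sense of how this discovery works, here is a minimal sketch of walking an APIs.json index with Python--the field names follow the APIs.json specification as I understand it, and the index URL is a hypothetical stand-in for any one of my repos:

```python
import requests

# Hypothetical location of an APIs.json index published via Github Pages.
INDEX_URL = "https://example.github.io/api-stack/apis.json"

index = requests.get(INDEX_URL).json()
print(index.get("name"), "-", index.get("description"))

# Each entry in the apis collection describes one simple API endpoint,
# with a human landing page and a machine-usable base URL.
for api in index.get("apis", []):
    print(api.get("name"), "->", api.get("baseURL"))
```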

I'm already seeing a significant increase in my own operational efficiency because I have my earlier code toolbox available as an API stack.



from http://ift.tt/1InkoUZ

Friday, November 27, 2015

Updating 265 Pages And 175 Links On My Network To Support The Swagger To OADF Shift

I have written 265 separate posts about Swagger across the API Evangelist network in the last three years. To reflect the recent shift of Swagger into the Open API Initiative (OAI), and the specification being reborn as the Open API Definition Format (OADF), I wanted to update all the stories I've told over the years, to help educate my readers about the evolution, and provide the most relevant links possible.

Using my Linkrot API, which helps me manage the links across my network of sites, I've identified all the pages with Swagger-relevant links, and made sure they are updated to point at the most recent material. I've also added a banner to each of the 265 posts, educating readers who come across these posts about the transition from Swagger to OADF, and helping them understand where to find the latest information.
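My actual Linkrot API works against a central index of every link on my network, but the sweep itself looks something like this rough sketch, assuming posts stored as local files and an illustrative old-to-new URL mapping:

```python
import os

# Illustrative mapping from outdated Swagger links to their new homes.
OLD_TO_NEW = {
    "http://swagger.io/old-spec": "http://example.com/oadf-spec",  # hypothetical URLs
}

for root, _, files in os.walk("posts"):
    for name in files:
        if not name.endswith((".html", ".md")):
            continue
        path = os.path.join(root, name)
        with open(path) as handle:
            body = handle.read()
        updated = body
        for old, new in OLD_TO_NEW.items():
            updated = updated.replace(old, new)
        if updated != body:
            # Only rewrite the pages where an outdated link was actually found.
            with open(path, "w") as handle:
                handle.write(updated)
            print("updated links in", path)
```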

My network of sites is meant to be my workbench for the API space, and provide the latest information possible about what drives the API sector. It is important to me that the information is as accurate as possible, and that my readers stay in tune with the shifts of the API space, and where they can find what they need to be successful.

In the end though, all of this is really just business. 



from http://ift.tt/1Q3sPWC

Tuesday, November 24, 2015

For A Brief Moment We All Had Swagger In The API Space

For a brief moment in the API space, we all had swagger. When it all began, we were working towards interoperability in a digital world where none of us actually wanted to work together, after being burnt SOA bad before. We all pretended to be working together, but in reality we were operating on just a handful of available verbs, until one person came to the table with a common approach that we could all share--something that would bring us together for a brief moment in history.

"It won't work", said the RESTafarians! "It doesn't describe everything", said everyone else. However, for those who understood, it was a start. It was an approach that allowed us to easily define the value we were trying to deliver in the API space. Something that turned out to be a seed for so much more, in a seemingly rich API world, but one that in reality was toxic to anything that was actually sustainable. Somehow, in this environment, one individual managed to establish a little back and forth motion, that over time would pick up momentum, setting a rhythm everyone else could follow when defining, sharing, collaborating, and building tooling online.

We all had a little swagger...

For a brief moment, we were all walking together, blindly following the lead that was given to us, while also bringing our own contribution, adding to the momentum day by day, month by month. Somehow we had managed to come together and step in sync, and move in a single direction.

This has all come to an end, as the beasts of business have had their way. There will be no open--just products that call themselves open. There will be no tune for us all to march to, whether you are a developer or not. We just have acronyms that only hold meaning to those within the club.

The fun times are over. The rhythm has ceased. Money and business have won over community and collaboration, but for a brief moment we all had swagger in the API space.



from http://ift.tt/1P5fuym

Monday, November 23, 2015

Why You May Not Find Me At The Bar After Tech Events

When you see me at conferences, you might notice that I am often very engaged while at the event. However, after the event lets out, and everyone heads off to the bar or pub, you may not find me tagging along anymore. You see, I am finding it increasingly hard to be present, because of one thing--my hearing.

You may not know this, but I am completely deaf in my left ear, and only have around 30% left in my right ear. These were the findings of a hearing test I had done in 2007, and I'm assuming that by 2015 I have lost even more. Historically I have done pretty well, with a mix of reading lips and piecing together what words I do hear, but this year I am finding it increasingly difficult to make things work.

As I live longer with my hearing impairment, I find one side effect is that I tend to feel sounds more than I hear them, and when I'm in loud bars, I tend to feel everything, and hear nothing. This results in me feeling and hearing the loud noises, but actually understanding none of what people around me are saying to me. Overall this is really overstimulating, and after spending a day at a conference it can be very difficult for me, leaving me able to handle no more than maybe an hour or two in loud environments.

I have also noticed a couple of times recently where people were talking to me, usually on my left side, and I did not notice, resulting in confusion. Then, when I hear only portions of conversations and seem uninterested (as I do not know what is going on), people seem a little offended--if this was you, I am truly sorry.

I understand not everyone who hangs out with me at events will read this, but I wanted to write it anyway, and gather my thoughts. I will be ditching out of bars earlier than I have in the past, and I'm hoping the folks who really want to hang out with me will join me in a quieter setting, where I can hopefully be engaged a little more.

Thank you for understanding.



from http://ift.tt/1Npb2KA

Monday, November 2, 2015

Making Sure My Stories Are Linked Properly

When I craft a story for any of my blogs, I use a single content management system that I custom built on top of my blog API. I'm always looking to make it more valuable to my readers by providing the relevant links, but also to make it more discoverable, and linkable, within my content management and contact relationship management systems.

I keep track of how many times I reference companies and people within articles, and the presence of Twitter accounts and website URLs is how I do this. So when I am writing articles, it is important that people and companies are linked up properly. Here is an example from a post I just wrote for the API consumption panel at @APIStrat (it hasn't been published yet).

I have tried to automate this in my CMS, so when it sees someone's Twitter account, or a company name that already exists in my system, it recommends a link. However, it is still up to me to OK the addition of links to Twitter handles and company names. I do not like this to be fully automated, because I like to retain full editorial control.
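The recommendation step is conceptually simple. Here is a simplified sketch, assuming a small in-memory registry of known handles and companies--my actual CMS pulls these from the tracking system described above:

```python
import re

# Hypothetical registry of known Twitter handles and company names.
KNOWN_LINKS = {
    "@apistrat": "http://apistrat.com",
    "3scale": "http://www.3scale.net",
}

def recommend_links(draft):
    """Suggest a link for each known handle or company name found in a draft."""
    suggestions = []
    for token, url in KNOWN_LINKS.items():
        if re.search(re.escape(token), draft, re.IGNORECASE):
            suggestions.append((token, url))
    return suggestions

# Each suggestion still requires an editorial OK before it becomes a link.
for token, url in recommend_links("Speaking on the API consumption panel at @APIStrat."):
    print("link %s -> %s?" % (token, url))
```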

I am just sharing this so that it gets baked into my operations, and I remember to use the link system more, but also to acknowledge how much work it takes to make all of my storytelling work--and that ultimately it is worth it. Formally telling the story on my blog is how I make sure all of this continues to be a reality across my operations.



from http://ift.tt/1KVlxgY

Tuesday, October 27, 2015

Striking The Right Balance With API Evangelist Partners

I get a lot of requests from individuals and companies who want to "partner" with me, and what this actually means varies widely. As a one-person operation, I have to be very careful who I engage with, because it is very easy for large organizations with more resources to drown me in talk, meetings, emails, and other alternatives to actually doing what it is that I do.

Another big challenge in partnering with new startups is that they often do not have a lot of money--lots of potential value, yet an unproven track record. I love believing in new startups, but in a world where they need me in the beginning, and not so much once they've made it, I have to be wary of any company that walks in the door. You may be a great bunch of folks, but once you pile on enough investors, and changes in executive management along the way--things change.

I have a lot of startups, VCs, and other companies who love to engage with me, "pick my brain", "understand my expertise", "get my take on where things are going", "craft their strategy", and many other informal ways of tapping the experience and perspective I bring to the table. Honestly, I thrive on doing this, but after 5 years, and being screwed by numerous startups and open source projects, I am starting to get very wary about who I help.

I understand you are super excited about your new API idea or service. I can easily get that way too! You see, that excitement will fade once you get traction and become attractive to investors, and your willingness to work with me, and share your ideas, tools, and resources with the community, will fade with it. Twitter is the best example of this in the wild. No, I didn't help Twitter get going, but I've seen the same syndrome play out with 50+ API and service provider startups in the last five years of operating.

Startups offer me equity to alleviate my concerns. Yawwwwn! Means nothing. I have a file in the filing cabinet of worthless options. Others offer me some sort of return on the traffic I send to them, and the conversions I generate. I guess this is a start, but it doesn't touch on the big partner deals I can often help bring to the table, the educating I do, and the general awareness I work to build in the API sector. Traditional tracking mechanisms will only capture a small portion of the value I can potentially generate for an API startup, and honestly are a waste of our time.

This leaves me with just getting to know startups, and dating, as I like to say, for quite a while before I engage too heavily. My only defense is to be really public with all my partnerships from day one. State publicly that I am partnering with company X. Then tag each post, white paper, or research project that I do on behalf of a relationship. Don't get me wrong, I get a lot of value out of this work I do, otherwise I wouldn't be doing it. However, the line I want to draw in the sand is just a public acknowledgement that I am helping this company figure out their strategy, tell stories, and help shape their layer of the API space.

This post is just me brainstorming a new tier of my partner program, which I'm thinking I will call "strategy and storytelling partners". When I am approached by, or discover, a new API company that I find interesting, and I begin investing time and resources into the relationship, I will publish this company on my list of partners. These relationships almost never involve money, and usually involve a heavy investment on my part when it comes to helping them walk through strategy and storytelling around their products and services, and potentially helping define and evolve the sector they are looking to operate within.

In the end, after several weeks of mulling over this subject, I do not see any contractual or technological solution to tracking how I help API startups in the space--it has to be a human solution. I will almost always share the existence of a partnership with a company from day one, and it is up to me to modulate how much "investment" I give. As this benefits (or not) the startup, it will be up to the company itself to kick back to me (and the community) as payback. If you don't, and you end up taking or keeping the lion's share of the value generated by my work, and the community I bring to the table, it is on you. The timeline will be there for everyone else to judge. #karma



from http://ift.tt/1GGJ2yY

Monday, October 26, 2015

Taking Another Look At The Tech Blogosphere

I used to be more immersed in the world of top tech blogs, when my partner in crime @AudreyWatters worked at ReadWrite and O'Reilly, and I knew more people on the beat. Over the last couple of years, as I keep my laser focus on the API space, and often end up going directly to the sources of API news, I have reduced the priority of major tech blogs in my feeds and Twitter stream(s).

During the planning of @APIStrat, and working with some of my clients at API Evangelist and APIWare, the question of other media sources, destinations, and voices in the tech space keeps coming up. As I'm crafting stories lately, I'm asking myself more and more: where should this be published? And maybe there is more value in some of this content being published beyond just my blogs.

To help me understand the landscape, I went through the top tech blogs, and hand-crafted a list of the ones that I feel are relevant, and open to submissions, either as full stories or just as news tips. Here is version 1.0 of this list:

While there are some other major publications with relevant tech news sections, these 24 represent what I'd consider to be the upper crust of accessible tech blogs that would entertain tips, news submissions, and contributions from the space.

I'm not naive enough to just submit random stories to the list above, but with the right information, or possibly a complete story, submitting to some of them might make sense from time to time. I have given quotes, information, and other contributions to some of these publications already, so I have some inroads built.

In line with my approach to storytelling in the API space, I will also be working to profile, and build relationships with, their editors and writers. I already act as the "API correspondent" for some writers at the publications listed above, so adding these companies to my monitoring system, where I can slowly add the writers for each publication, seems like a sensible approach.

It will take time, but I will hopefully be able to expand my information network to include regular contributions, tips, and sometimes just pointing more of these tech blogs to where I feel the relevant API stories really are.



from http://ift.tt/1LTuc5q

Friday, October 16, 2015

I Am Stumbling Just A Little Bit, Please Bear With Me As I Find My Way

I've been doing API Evangelist for a while now. Most of the time I can make this work, and honestly sometimes I just fucking rock it. Right now, I keep stumbling and falling on my face. I've written the same number of posts I usually do, but none of them are worthy of publishing.

I also find that in many of the conversations I engage in, I'm overly aggressive--which goes against what I'm about. I'm not apologizing for anything, because I would never do anything I don't back up 100%. I am just not my usual self, and I am having trouble figuring out why.

It would be easy to blame some corporate forces that are just pissing me off right now, and people co-opting my work without any recognition. Honestly, this has happened throughout the last five years, and I have nobody to blame but myself.

I always find a way to work through the doldrums, finding my way back to the center. This particular moment the currents seem a little swifter than normal, and I cannot figure out why. I trust that I will figure this shit out, I just wanted to put out there that I'm working on it.

I cherish my readers, and thrive on shedding light on what is going on. I hope this is just me, and not a signal of what is to come. It is easier if I'm to blame. ;-) See you on the flipside.



from http://ift.tt/1LpJOx4

Tuesday, September 8, 2015

Portable API Driven Or At Least JSON Driven Interactive Visualization Tooling

As I am working on the API and JSON driven visualization strategy for my Adopta.Agency open data work, I saw the cloud monitoring platform Librato publish their new "Space" interface as a Heroku add-on. I like dashboards and visualization tooling that can live on multiple platforms, engineered to be as portable and deployable as possible.

In a perfect world, infographics would be done using D3.js, and would all show their homework, with JSON or API definitions supporting every visualization. All of my Adopta.Agency projects will eventually possess a simple, embeddable D3.js visualization layer that can be published anywhere. Each project will have its JSON localized in the publicly available Github repository, and be explorable via any browser using Github Pages.
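The "homework" side of that pattern is just a matter of publishing chart-ready JSON alongside the data. Here is a rough sketch of that step, with hypothetical file names and columns standing in for a real project dataset--the D3.js page that reads the resulting file is left out here:

```python
import csv
import json

# Reduce a raw dataset down to the flat series a D3.js chart would load,
# using hypothetical file names and column headers for illustration.
with open("data/budget.csv", newline="") as handle:
    series = [
        {"year": int(row["year"]), "outlays": float(row["outlays"])}
        for row in csv.DictReader(handle)
    ]

# Committing this file to the Github repository makes it servable via
# Github Pages, where the visualization (and anyone else) can fetch it.
with open("data/budget-chart.json", "w") as handle:
    json.dump(series, handle, indent=2)
```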

The Librato approach reminded me that I'd also like to see modular, containerized versions of more advanced tooling, dashboards, and visualizations around some projects. This would only apply in scenarios where a little more compute is needed behind the visualizations than can be delivered with simple D3.js + JSON hosted on Github. Essentially this gives me two grades of portable visualization deployment: light and heavy duty. I like the idea that it could be a native add-on wherever you are deploying an open API or dataset.

I still have a lot of work to do when it comes to the light duty blueprint of JSON + D3.js, and API + D3.js, to support Adopta.Agency. I will focus on this, but keep in mind modular cloud deployments using Docker and Heroku for the datasets that require heavier data lifting.



from http://ift.tt/1UFBYUn

Saturday, September 5, 2015

Pushing Forward Algorithmic Transparency When It Comes To The Concept Of Surge Pricing

I've been fascinated by the idea of surge pricing since Uber introduced the concept to me. I'm not interested in it because of what it will do for my business, I'm interested because of what it will do for, and to, business. I'm also concerned about what this will do to the layers of our society who can't afford, and aren't able to keep up with, this algorithmic meritocracy we are assembling.

While listening to my podcasts the other day, I learned that Gogo in-flight wifi also uses surge pricing, which is why some flights are more expensive than others. I had long suspected they used some sort of algorithm for figuring out their pricing, because on some flights I'm spending $10.00, and on others I'm paying $30.00. Obviously they are in search of the sweet spot, to make the most money off business travelers looking to get their fix.

Algorithmic transparency is something I'm very interested in, and something I feel APIs have a huge role to play in, helping us make sense of exactly how companies structure their pricing. This is right up my alley, and something I will add to my monitoring, searching for stories that mention surge pricing, and startups who wield it publicly as part of their strategy, as well as those who work to keep it a secret.

This is where my research starts going beyond just APIs, but it is also an area I hope to influence with some API ways of thinking. We'll see where it all goes. Hopefully, by tuning in early, I can help steer some thinking when it comes to businesses approaching surge pricing (or not).



from http://ift.tt/1JI4ZK1

Being a Data Janitor and Cleaning Up Data Portability Vomit

As I work through the XML, tab and comma separated, and spreadsheet strewn landscape of federal government data as part of my Adopta.Agency work, I'm constantly reminded that the data being published is often retribution, more than it is anything of actual use. Most of what I find, despite much of it being part of some sort of "open data" or "data portability" mandate, is not actually meant by its publishers to be usable.

In between the cracks of my government open data work, I'm also dealing with the portability of my own digital legacy, working to migrate exported Evernote notes into my system, as well as legacy Tweets from my Twitter archive download. While the communicated intent of these exports from Evernote and Twitter may be about data portability, like the government data, they really do not give a shit about you actually doing anything with the data.

The requests for software-as-a-service providers and government agencies to produce open data versions of our own user or public data have upset the gatekeepers, resulting in what I see as passive-aggressive data portability vomit--here you go, put that to use!! A president mandating that we database administrators open up our resources, and give up our power? Ha! The users who helped us grow into a successful startup actually want a copy of their data? Ha! Fuck you! Take that!

This is why there is so much data janitorial work today: many of us are playing the role of janitor in the elementary school that is information technology (IT), constantly coming across the data portability vomit that the gatekeepers of legacy IT power structures (1st and 2nd graders), and the 2.0 Silicon Valley version (3rd and 4th graders), produce. You see, they don't actually want us to be successful, and this is one of the ways they protect the power they perceive they possess.



from http://ift.tt/1KxseMw

Sunday, August 16, 2015

Legacy Power and Control Contained Within The Acronym

As I wade through government, higher education, and scientific research, exposing valuable data and APIs, the single biggest area of friction I encounter is the acronym. Ironically, this paradigm is also reflected in the mission of API Evangelist--helping normal people understand what the hell an Application Programming Interface is. I live in a sort of tech purgatory, and I am well aware of it.

The number one reason acronyms are used, I think, is purely because we are lazy. Secondarily though, I think there is also a lot of legacy power and control represented in every acronym. These little abbreviated nuggets can be the difference between you being in the club, or not. You either understand the technology at play, or you don't. You are in the right government circles, or not. You are trained in a specific field, or you are not. I don't think people consider what they wield when they use acronyms; there are a lot of baked-in, subconscious things going on.

One of the most important aspects of the API journey, in my opinion, is that you begin to unwind a lot of the code (pun intended) that has been laid down over years of IT operation, government policy, and research cycles. When you begin to unwind this, and make resources available via intuitive URL endpoints, you increase the chances a piece of data, content, or other digital resource will get put to use--something not all parties are actually interested in. Historically, IT, government, and researchers have wielded their power and control by locking up valuable resources, playing gatekeeper of who is in, and who is out--APIs have the potential to unwind this legacy debt.

APIs do not decode these legacy corporate, government, and institutional pools of power and control by default. You can just as easily pay the legacy forward with an API gateway, or via an API architect who sees no value in getting to know the resources they are putting to work, let alone their consumer(s). However, if done with the right approach, APIs can provide a rich toolbox that can assist any company, institution, or government agency in decoding the legacy each has built up.

You can see this play out in the recent EPA, er I mean Environmental Protection Agency, work I did. Who would ever know that the EPA CERCLIS API was actually the Comprehensive Environmental Response, Compensation, and Liability Information System API? You don't, unless you are in the club, or you do the heavy lifting (clicking) to discover the fine print. I am not saying the person who named the Resource Conservation and Recovery Act Information API the RCRAInfo service was malicious in what they were doing--this type of unconscious behavior occurs all the time.

Ultimately, I do not think there is a solution for this. Acronyms do provide us with a lot of benefit when it comes to making language and communication more efficient. However I think, just like we are seeing play out with algorithms, we need to be more mindful of the legacy we are paying forward when we use acronyms, and make sure we are as transparent as possible by providing dictionaries, glossaries, and other tooling.

At the very least, before you use an acronym, make sure your audience will not have to work extra hard to get up to speed, and do the heavy lifting required to reach as wide an audience as you possibly can. It is the API way. ;-)



from http://ift.tt/1fj97aj

Saturday, August 15, 2015

Asking For Help When I Needed To Better Understand The Accounting For US Federal Budget

As I was working my way through the data for the US federal budget, I noticed a special row in between the years 1976 and 1977. It simply had the entry TQ, with no other information available about what it was.

To get an answer regarding what this entry was, I went to my Twitter followers:

Then, because I have the most amazing Twitter followers ever, I got this response from Stephen H. Holden (@SteveHHolden):

When doing any open data work, you can't be afraid to just ask for help when you hit a wall. I've been doing data work for 25 years, and I constantly hit walls when it comes to formatting, metadata, and the data itself.

The moral of this story is to use your Twitter followers, use your Facebook and LinkedIn followers, and make sure to publish questions as Github issues--then always tell the story!



from http://ift.tt/1TGSWXl

Friday, August 14, 2015

Stepping Up My Open Data Work With Adopta.Agency, Thanks To Knight Foundation, @3Scale, and @APISpark

I always have numerous side projects cooking. Occasionally I will submit these projects for potential grant funding. One of my projects, which I called Federal Agency Dataset Adoption, was awarded a prototype grant from the Knight Foundation. It was the perfect time to get funding for my open data work, because it coincided with the Summer of APIs work I'm doing with Restlet, and work already in progress defining government open data and APIs with 3Scale.

After reviewing my Federal Agency Dataset Adoption work, I purchased a domain, and quickly got to work on my first two prototype projects. I'm calling the prototype Adopta.Agency, and kicking it off with two projects that reflect my passion for this work.

US Federal Budget
This is a project to make the US federal budget more machine readable, in hopes of building more meaningful tools on top of it. You can already access the historical budget via spreadsheets, but this project works to make sure everything is available as CSV and JSON, as well as an active API.

VA Data Portal
This project is looking to move forward the conversation around VA data, making it more accessible as CSV and JSON files, and deploying simple APIs when I have the time. The VA needs help to make sure all of its vital assets are machine readable by default.

The first month of the project will be focused on defining the Adopta Blueprint, by tackling projects that my partner in crime Audrey Watters (@audreywatters) and I feel are important, and that set the right tone for the movement. Once the blueprint is stable, we will be inviting other people into the mix, and tackling some new projects.

Adopta.Agency is not a new technology, or a new platform; it is an open blueprint that employs existing services like Github, and tools like CSV-to-JSON converters, to help move the government open data movement forward just one or two steps. The government is working hard, as we speak, to open up data, but these agencies don't always have the skills and resources to make sure these valuable public assets are ready for use in other websites, applications, analysis, and visualizations--this is where we come in!
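At its core, the conversion step each project performs is small enough to sketch in a few lines of Python--the file names and the header cleanup shown here are hypothetical, standing in for whatever janitorial work a target dataset actually needs:

```python
import csv
import json

# Read a target agency dataset, normalizing the messy column headers that
# government spreadsheets tend to carry (hypothetical file names throughout).
with open("target-dataset.csv", newline="") as handle:
    rows = [
        {key.strip().lower().replace(" ", "_"): value.strip()
         for key, value in row.items()}
        for row in csv.DictReader(handle)
    ]

# Publish the cleaned JSON into the project's data folder, where the Github
# repository (and Github Pages) makes it available to everyone else.
with open("data/target-dataset.json", "w") as handle:
    json.dump(rows, handle, indent=2)

print("converted %d rows to JSON" % len(rows))
```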

With Adopta.Agency, we are looking to define a Github enabled, open data and API fueled, human driven network that helps us truly realize the potential of open data and APIs in government -- please join in today.



from http://ift.tt/1DTylYx

Being The Change We Want To See In Open Government Data With Adopta.Agency

I have had a passion for open data for a number of years. Each time the federal budget has come out in the last 10 years, I would parse the PDFs and generate XML, and more recently JSON, to help me better understand how our government works. I've worked hard to support open data and APIs in the federal government since 2012, resulting in me heading to Washington DC to work on open data projects at the Department of Veterans Affairs (VA) as a Presidential Innovation Fellow (PIF).

I understand how hard it is to do open data and APIs in government, and I am a big supporter of those in government who are working to open up anything. I also feel there is so much work left to be done to augment these efforts. While there are thousands of datasets now available via Data.gov, and in the handful of data.json files published by federal agencies, much of this data leaves a lot to be desired when it comes to actually putting it to use.

As people who work with data know, it takes a lot of work to clean up and normalize everything--there is just no way around this, and much of the government data that has been opened up still needs this janitorial work, as well as conversion into a common data format like JSON. When looking through government open data, you are faced with spreadsheets, text files, PDFs, and any number of other obscure formats, which may meet the minimum requirements for open data, but need a lot of work to be truly ready for use in a website, visualization, or mobile application.

Adopta.Agency is meant to be an open blueprint, to help target valuable government open datasets, clean them up, and at a minimum, convert them to be available as JSON files. When possible, projects will also launch open APIs, but the minimum viable movement forward should be the cleaning and conversion to JSON. Each project begins with forking the Adopta Blueprint, which walks users through the targeting, cleaning, and publishing of data to make it more accessible, and usable by others.

Adopta.Agency employs Github repositories for managing the process, storage, and sharing of data files, while also acting as a gateway for accessing the APIs, and for engaging in a conversation around how to improve upon the data and APIs available as part of each project (which is what APIs are all about). Adopta is not a specific technology, it is a blueprint for using commonly available tools and services to move government open data forward one or two steps.

We feel strongly that making government open data available in a machine-readable format can be a catalyst for change. Ironically, even though this data and these APIs are meant for other computers and applications, we need humans to step up and be stewards of an ongoing portion of the journey. Government agencies do not have the skills, resources, and awareness to do it all, and when you actually think about the big picture, you realize it will take a team effort to make this happen.

Adopta.Agency is looking to define a Github enabled, open data and API fueled, but ultimately human driven network to help everyone realize the potential of open data and APIs in government -- please join us today.



from http://ift.tt/1KmmeAe

Thursday, August 13, 2015

Forget Uber, If You Build A Platform That Feeds People Like This, Then I Will Care

I was listening to the To Cut Food Waste, Spain's Solidarity Fridge Supplies Endless Leftovers segment on NPR today, which made me happy, but then quickly left me sad regarding 99% of the tech solutions I see being developed today. The tech sector loves to showcase how smart we all are, but in the grand scheme of things, we are mostly providing solutions to non-problems, when there is a world full of real problems needing to be solved.

I remember being at MIT for a hackathon a couple of years back, where, when we were done with the catered food for our event, the food was taken down to a corner in a hallway that had a table and a webcam. After putting the bagels, pizza, juice, and other items on the table, within about 20 minutes it was gone--students fed, and food not wasted. #winning

The solidarity fridge idea reminded me of this experience, and it makes me sad that there is not an Uber for fucking feeding people! Why the hell isn't there a solidarity fridge and pantry on every street corner in the world? Why don't we have an Uber for this idea? Why aren't there food trucks doing this? Oh, because there is no fortune to be made in actually making sure people are being fed, and Silicon Valley really doesn't give a shit about solving real problems--it is just what we tell ourselves so we can sleep at night.

If you are building a platform that helps neighborhoods manage their solidarity fridges and pantries, complete with API, mobile and web apps, and SMS push notifications, then you will see me get real excited about what you are doing--until then...



from http://ift.tt/1Ne4vjO

Wednesday, July 22, 2015

Micro Attempts At Being The Change I Want To See in Government

One by-product of being as OCD as I am is that I am always looking for the smallest possible way that I can help grease the wheels of the API economy. A big part of helping the average person understand any company or API is possessing a simple image to represent the concept, whether a screenshot, logo, or other visualization. A picture is worth a thousand words, and is as essential to API operations as your actual service.

As I worked to understand the agencies that power our federal government, I quickly realized I needed a logo for each of the 246 federal agencies--something that didn't exist. I could find many on Wikipedia, and Google the others, but there was no single source of logos for federal agencies--not even at the Federal Government Directory API from USA.gov. Unacceptable! So I created my own, and published it to Github.

Ultimately, I am not happy with all of the logos I found, and think the collection can be greatly improved upon, but it provides me with a base title, description, and image for each of our federal agencies. It is something you can find in the Github repository for my federal government API research, with a JSON representation of all federal agencies + logos under the data folder for the research.
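To show how small the barrier to reuse is, here is a quick sketch of putting the dataset to work--the file path and record fields are assumptions for illustration, so check the data folder in the repository for the actual shape:

```python
import json

# Load the micro dataset of federal agencies (hypothetical file path).
with open("data/federal-agencies.json") as handle:
    agencies = json.load(handle)

# Each record is assumed to carry the base title, description, and image
# described above, e.g. {"name": "...", "description": "...", "logo": "..."}.
for agency in agencies:
    print(agency["name"], "->", agency["logo"])
```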

It took me about 6 hours to do this work, and it is something I know has been used by others, including within the federal government, as well as across numerous pieces of my own API research and storytelling. These are the little actions I enjoy inflicting, helping to wield APIs, and machine readable, meaningful, openly available micro datasets that can be used in as many scenarios as possible. Logos might seem irrelevant in the larger open data war, but when it comes to the smaller skirmishes, a logo is an important tool in your toolbox.



from http://ift.tt/1SD5r0a

Friday, July 3, 2015

Use Of APIs By Regulators To Audit Telco Behavior

I keep reading stories about federal regulators investigating, and issuing fines to, telcos, like AT&T paying $105 million for unauthorized charges on customer bills, and Verizon and Sprint paying $158 million for cramming charges on customers' bills. Maybe I am biased (I am), but I can't help but think about the potential for APIs, and OAuth, to help in this situation.

As an AT&T and Verizon customer, I can say that I could use help in auditing my accounts. I'm sure other users would pay for a service that would help monitor their accounts, looking for irregularities. I think about services like Cloudability, which help me manage costs in my cloud computing environment--why aren't there more of these things in the consumer realm?

If all services that are available online simply had APIs for their accounts, this would be possible. It would also open up the door for government agencies and public sector organizations to step up and provide research, auditing, and potentially data counseling for the average citizen and consumer.

I want more access to the data I generate via the telecommunication companies. I also want to be able to take advantage of services that help me manage my relationships with these companies. And I think there should be a certain amount of regulatory access and control introduced into all of this. APIs provide not just a programmatic way to do this, but a real-time way, which might provide the balance we need--rather than the feds only having the information and enforcement power they need to take action every few years.



from http://ift.tt/1H5wvCf

Friday, June 19, 2015

A Better Understanding When It Comes To Licensing Of Data Served Up Through APIs

Through my work on API Evangelist, and heavy reliance on Github, I have a pretty good handle on the licensing of code involved with APIs--I recommend following Github's advice. Also, derived from my work on the Oracle v Google copyright case, and the creation of API Commons, I have a solid handle on the licensing of API interfaces. One area where I am currently deficient, and something that has long been on my to-do list, is establishing a clear stance on how to license data served up via APIs.

My goal is to eventually craft a static page that helps API providers and consumers better understand licensing for the entire stack, from database, to server, to the API definition, all the way to the client. I rely on the Open Data Commons for three licensing options for open data:

  • Public Domain Dedication and License (PDDL)
  • Attribution License (ODC-By)
  • Open Database License (ODbL)

I am adding these three licensing options to my politics of APIs research, and will work to publish a single research project that provides guidance on not just the licensing of data served up through APIs, but also addresses code, definitions, schemas, and more.

The guidance from Open Data Commons is meant for data owners who are looking to license their data before making it available via an API. If you are working with an existing dataset, make sure to consult the data source on licensing restrictions--and carry these forward as you do any additional work.



from http://ift.tt/1L7yrwO

Monday, May 11, 2015

On Encountering Skeptical Views Around Open Data

I spend a lot of time talking about open data in business, and in government of all shapes and sizes. This topic was front and center at APIDays Berlin / APIStrat Europe, and APIDays Mediterranea. Open data was a part of numerous talks, but more importantly it dominated conversations in the hallways, and late into the night at the drinking establishments where we gathered.

In my experience there are four camps of people when it comes to open data:

  1. Those who know nothing about open data
  2. Those who don't know much, but have lots of opinions
  3. Those who have experience, and over promise the results
  4. Those who have experience, and get hands dirty

I'd say the people I met in my latest travels were overwhelmingly in the first bucket, or the fourth bucket. However, I did meet a handful of folks who I put in the second bucket, who were very dismissive of the potential of open data. In my experience these people have either listened to the rhetoric of people in bucket three, or just don't have the experience that many of the rest of us have.

I agree that the state of open data coming out of city, state, and federal level government programs is often lacking much of what we'd like to see in a healthy, mature program. What I feel skeptics miss is hands-on experience making this happen in government (this shit is hard), and a willingness to help take things to the next level. This takes an effort from all of us, not just the people in government--there is a lot you can do from the outside to help make things better (not just criticize).

It feels like we are getting past a lot of the damage created by early open data rhetoric, which I felt over-promised and under-delivered--something we have to learn from in future storytelling. I don't feel that all open data skeptics and critics are required to get their hands dirty, but I guarantee that if you work on a couple of hands-on projects, your views will change.



from http://ift.tt/1dYmyMY

Tuesday, May 5, 2015

Shhhhh, Be Very Very Quiet, I Am Hunting Mansplainers

For the most part I ignore the bullshit that flows in my girlfriend @audreywatters' Twitter timeline (yes, I am watching). We both tend to write some pretty critical things about technology, but for some reason (hmmm, what could it be), her timeline is full of some pretty vocal "dudes" looking to set her straight. I just do not have the energy to challenge every sexist male looking to tell her she is wrong, but every once in a while I just need to vent a little--so I go hunting mansplainers in her Twitter timeline.

One young white fellow wins the prize this week. He got my attention, resulting in a conversation that ended in this response:

Yeah, the days when she was writing that, and we discussed all the details, gave me no insight into the logic, let alone the last five years of discussing this topic with her. During my mansplainer hunting, I'm not out to convince these dudes of how out of line they are; honestly, I'm just looking to fuck with them, and let them know I'm here. I do not know the answer to helping us sexist men learn the error of our ways. Yes, even I have sexist tendencies--the only difference is that I am well on my way to learning. You see, I am white and male, and even though I grew up very poor, raised by a single mother, I have still enjoyed a very privileged existence for most of my life.

I could easily cherry-pick specific Tweets from this dude, showing his flip-flopping nature, where he blames Audrey for specific things he can't actually cite in her post, and talks of her blaming these other men he's defending for doing what he claims are sinister things--wait, no, sinister was his reference in a Twitter conversation with someone else. No wait, the last paragraph in her post alludes to this. I just need to be able to follow the Twitter thread to understand his point. Why am I so dense?

Look, I don't give a shit, buddy. I'm just fucking with you because you are spouting stupid shit in her timeline. I really don't give a fuck where you are coming from. If you knew the number of dudes I've seen tell her how wrong she is, or that she needs to shut the fuck up, all the way to hacking my websites and telling me to keep her in line, you'd go away pretty quickly (you are in good company). You need to tune into the bigger conversation, and not feel the need to tell women they are wrong. The reason you feel this way is that you don't see her as an expert, because she is a woman. Period.

What people like you should do is write a response on your own blog, in your own domain, and reply simply with "here are my thoughts". Then you can lay out all the detail you need, cite your own sources, and hopefully do as much work as she did when crafting her story. Then, if she cares (she won't), she can reply on her blog, and return the favor to you. I know what you are going to say: oh, I can't even open my mouth without mansplaining? Probably not. You are clueless about the bigger picture, except for the view from your own position.

I'm not saying everything that Audrey says is right, but I am saying you need to step back and analyze your approach. One thing I've learned during my time running a business with my ex-wife, and the amazing five-plus years I've spent with Audrey, is that there is more to this than us men can ever imagine. I disagree with a lot of things I read online; most of them I do not ever respond to, and for the things I do, I critically evaluate how I respond--I just do not vomit my privileged position into people's timelines.

I know, futile effort. I can never change these types of people's behavior, but I just can't help hunting the mansplainers in her timeline, and venting, while letting them know I'm sitting by her side. If you have any other comments or questions, please read Is My Girlfriend Bothering You?

Thanks!



from http://ift.tt/1QiriLg

Thursday, March 12, 2015

I Have Gotten More Return On The Ideas I Have Set Free Than Any I Have Locked Up

When I walk through the junkyard of startups and business ideas in my mind, I can't help but feel that much of my current success with API Evangelist has more to do with open ideas than it does any other aspect. I have numerous startups under my belt, where I tried to capitalize on ideas I've had, ranging from selling wine online to real estate data, but nothing like what I'm doing now.

Do not get me wrong, I've had a lot of success along the way, but nothing that compares to the feeling of success I have with API Evangelist. Other than being in the right place at the right time, I cannot come up with much that is different between API Evangelist and my previous work--except the fact that API Evangelist is focused on opening up and freeing every idea that comes along.

This type of approach to business might not be right for everyone. I'm sure I've also passed up some pretty lucrative opportunities to monetize my ideas, but in the end, I mostly enjoy making enough money to get by, and generating as much positive exhaust around my ideas as I can. I'm not saying all businesses need to think like this, but the more API-centric your business is, the more I think you have to consider the repercussions of locking up ideas vs. setting them free.



from http://ift.tt/1wBIW70

Monday, February 23, 2015

Making Sense At The 100K Level: Twitter, Github, And Google Groups

I try to make sense of which companies are doing interesting things in the API space, and of the interesting technologies produced by these companies, which sometimes take on a life of their own. The thing I constantly wrestle with is: how do you actually do this? The best tools in my toolbox currently are Twitter and Github. These two platforms provide me with a wealth of information about what is going on within a company or specific project, the surrounding community, and the relationships they have developed (or not) along the way.

Recently I've been spending time diving deeper into the Swagger community, and two key sources of information are the @swaggerapi Twitter account, and the Swagger Github account, with its 25+ repositories. Using each of these platform APIs, I can pull followers, favorites, and general activity for the Swagger community. Then I come up against the SwaggerSocket Google Group. While there is a rich amount of information and activity in the forum, the lack of RSS or an API means I can't make sense of the conversation at a macro level, alongside the other signals I'm tracking--grrrrrr.
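The Github half of this monitoring is trivially scriptable. Here is a rough sketch against the public Github v3 API--the org name is an assumption for illustration, unauthenticated calls are rate limited, and the Twitter side requires OAuth so it is omitted here:

```python
import requests

# Pull every public repository for an organization (org name assumed here).
repos = requests.get(
    "https://api.github.com/orgs/swagger-api/repos",
    params={"per_page": 100},
).json()

# Stars, forks, and last update give a rough signal of community activity.
for repo in sorted(repos, key=lambda r: r["stargazers_count"], reverse=True):
    print(repo["name"], repo["stargazers_count"],
          repo["forks_count"], repo["updated_at"])
```

There is no equivalent call I can make against the Google Group, which is exactly the problem.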

At any time I can tune into the activity on Twitter and Github for the Swagger community, but the Google Group takes much more work, and I have to go to the website to view it, and manually engage. Ideally I could see Twitter, Github, and Google Group activity side by side, and make sense of the bigger picture. I can get email updates from the forum, but this only applies from now forward, and gives me no context or history of the conversation within the group--without visiting the website.

Just a side rant from the day. This is not a critique of the Swagger community, just an outside view on the usage of Google Groups as an API community management tool. I use the platform for APIs.json and API Commons, but I think I might work on a better way to manage the community, one that allows outside entities to better track the conversation.



from http://ift.tt/1Eqi2jU

Sunday, February 8, 2015

Emails From People Saying Nice Things And Not Wanting Anything From Me

I process my thoughts through stories on my blogs, and oftentimes you'll find me bitching about people and companies here on kinlane.com. Other times you'll find me waxing poetic about how nice people can be--welcome to my bipolar blogging world.

In this post, I want to say how much I like finding nice emails from people in my inbox, especially when they don't want anything from me. Getting these nice notes from people, about specific stories or topics, or just generally thanking me for what I do, makes it all worth it.

Ok, I'll stop gushing, but I just wanted to say thank you—you know who you are.



from http://ift.tt/1A91Cee

Friday, February 6, 2015

An Archive.org For Email Newsletters Using Context.io

I'm not going to beat around the bush on this idea; it just needs to get done, and I just don't have the time. We need an archive.org for email newsletters, and other POP-related elements of the digital world we have created for ourselves. Whether we love or hate the inbox layer of our lives, it plays a significant role in crafting our daily reality. Bottom line, we don't always keep the history that happens, and we should be recording it all, so that we can pause, and re-evaluate at any point in the future.

I cannot keep up with the amount of newsletters flowing into my inbox, but I do need to be able to access this layer as I have the bandwidth available to process it. Using Context.io, I need you to create an index of the popular email newsletters that are emerging. I feel like we are seeing a renaissance in email, in the form of the business newsletter--something I don't always have the time to participate in.

During the course of my daily monitoring, I received an email from Congress.gov about a new legislative email newsletter, which seems like something I'd be interested in, but then immediately I'm questioning my ability to process the new information:

  • A specific bill in the current Congress - Receive an email when there are updates to a specific bill (new cosponsors, committee action, vote taken, etc.); emails are sent once a day if there has been a change in a particular bill’s status since the previous day.
  • A specific member’s legislative activity - Receive an email when a specific member introduces or cosponsors a bill; emails are sent once a day if a member has introduced or cosponsored a bill since the previous day.
  • Congressional Record - Receive an email as soon as a new issue of the Congressional Record is available on Congress.gov.

This is all information I'm willing to digest, but ultimately I have to weigh it alongside the rest of my information diet--a process that isn't always equitable. If I could acknowledge an email newsletter as something that I'm interested in, but only engage when I have time, I would be open to the adoption of a new service.

We need to record this layer of our history, something that our inboxes just aren't doing well enough. I think we need a steward to step up and be the curator of this important content that is being sent to our inboxes, and doesn't always exist on the open Internet. Technically, I do not think it would be too difficult to do using Context.io. I just think someone needs to spend a lot of time signing up for newsletters, and being creative in crafting the interface and index, so people can engage with it in meaningful ways--something people will actually find useful and pay for.
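For whoever picks this up, here is a minimal sketch of the harvesting side, assuming Context.io's 2.0 REST API with two-legged OAuth--the account id, sender filter, and parameter names are illustrative, so verify everything against the Context.io docs:

```python
import requests
from requests_oauthlib import OAuth1

# Two-legged OAuth with a Context.io consumer key and secret (placeholders).
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET")
ACCOUNT_ID = "YOUR_ACCOUNT_ID"  # hypothetical Context.io account id

# Pull recent messages from a newsletter sender on the connected mailbox
# (the "from", "include_body", and "limit" parameters are assumptions).
response = requests.get(
    "https://api.context.io/2.0/accounts/%s/messages" % ACCOUNT_ID,
    params={"from": "newsletter@example.com", "include_body": 1, "limit": 50},
    auth=auth,
)

# Each harvested message would then be stored and indexed by the archive.
for message in response.json():
    print(message.get("date"), message.get("subject"))
```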



from http://ift.tt/1KoYwWh