Monday, May 2, 2016

Happy To See Unsustainable Free Access To Valuable Tooling Go Away

I was talking with my friend Dan Cundiff about Page2RSS shutting down, and the viability of offering up tools like this for us mere mortals to use in our everyday work.

If you aren't familiar with Page2RSS, it is a simple tool that takes a static website and turns it into an RSS feed for you. A valuable service for those websites that do not understand the importance of RSS, but unfortunately a tool that has gone dark as of today.

Page2RSS is one of those valuable tools that is more of a feature than an actual product all by itself. These types of tools really don't take much to keep alive and running, something you can scale using AWS or other cloud infrastructure, but only if you have an actual business model, and customers who are willing to pay for it.
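A tool like this really is small. The output side boils down to wrapping whatever changed on a page in RSS 2.0 XML. A rough sketch of just that half, assuming the page scraping and diffing has already produced a list of items (all names here are illustrative, not from the actual Page2RSS code):

```javascript
// Build a minimal RSS 2.0 feed from items scraped off a static page.
// Each item is assumed to be { title, link, pubDate } -- the HTML
// parsing and change detection are out of scope for this sketch.
function escapeXml(s) {
  return s.replace(/[<>&'"]/g, (c) => ({
    '<': '&lt;', '>': '&gt;', '&': '&amp;', "'": '&apos;', '"': '&quot;'
  }[c]));
}

function buildRssFeed(channel, items) {
  const itemXml = items.map((i) => [
    '    <item>',
    `      <title>${escapeXml(i.title)}</title>`,
    `      <link>${escapeXml(i.link)}</link>`,
    `      <pubDate>${i.pubDate.toUTCString()}</pubDate>`,
    '    </item>'
  ].join('\n')).join('\n');

  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<rss version="2.0">',
    '  <channel>',
    `    <title>${escapeXml(channel.title)}</title>`,
    `    <link>${escapeXml(channel.link)}</link>`,
    `    <description>${escapeXml(channel.description)}</description>`,
    itemXml,
    '  </channel>',
    '</rss>'
  ].join('\n');
}
```

The hard part of running something like this isn't the code, it is the crawling, storage, and bandwidth bill that comes with doing it for everyone, for free.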

The problem is, the tone has been set for the last 10 years that free is how you do things. A concept that has been led by tech giants like Google, and wave after wave of VC investment--setting an unrealistic expectation that things should be free. Providers of simple tools like Page2RSS feel that if they are going to compete they will have to be free, even if they can't afford it. This results in consumers of simple tools like Page2RSS thinking things should be free, because if a tool is not, they'll go find one that is--establishing a very unsustainable cycle.

As the tech giants shutter more of their free services, and VC investment focuses on the enterprise, maybe the bar will be raised to a more realistic place. One where tooling providers can accept micropayments for the tooling and services they provide, and consumers can come back to reality, realize it takes money to develop and support these valuable tools, and become more willing to cough up some change to pay for the valuable services and tooling they depend on.



from http://ift.tt/1NQ5Yj9

Saturday, April 23, 2016

The Potential Of Jekyll As A Static Data Engine

I am an old database guy. I got my first job working on databases in COBOL in 1987. I have worked with almost every database platform out there, and I love data. I remember writing my own indexes, relationships, and other things we take for granted now. I remember being religious and dogmatic about the platforms I used, which included FoxPro and eventually Microsoft SQL Server. I have grown out of any dogma for any platform, tool, or specific approach, but I continue to manage quite a bit of data for my personal and professional pleasure.

Data is core to API Evangelist, and my API industry research. Even though I still have an Amazon RDS MySQL core to my data operations, this centralized approach is slowly being cannibalized by a more distributed, static, JSON and YAML, and Jekyll driven vision. Increasingly my data is living in the _data folder of each static project repo, being hosted on Github Pages, as well as some specialized Linux Jekyll EC2 deployments I am working with. I do not think this will ever entirely replace my older, more centralized approach, but it has grown to be a significant portion of my operations.

There are many issues with this approach; keeping things up to date, providing a comprehensive search, and other things are still challenges for me. However, the static nature of both the UI and the data layer for these projects is proving to have benefits that far outweigh the challenges. The openness and availability of the data and content that drives all my research, project sites, and tooling is refreshing for me. I'm also enjoying having hundreds of very static, cached, distributed websites, and tools that don't change -- unless I direct them to with a publish or a push.

One area I am still exploring in all of this is the pros / cons of delivering UI experiences with pure JavaScript which consumes the JSON, a heavier Liquid experience which also consumes the JSON, or taking it to new levels with a combination of the two. Some of the stuff I'm doing with APIs.json and OpenAPI Spec driven documentation, which uses Liquid and JavaScript, feels liberating as an approach to delivering developer experiences (DX). If you haven't played with the _data folder + Liquid in Jekyll, I highly recommend it--it is a different game.
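For anyone who hasn't tried it, the mechanics are tiny: drop a YAML, JSON, or CSV file into the _data folder, and Liquid can iterate it on any page at build time. A quick illustrative sketch (the file name and fields here are made up, not from one of my actual projects):

```liquid
{% comment %} Renders _data/apis.yml, a hypothetical list of API entries. {% endcomment %}
<ul>
{% for api in site.data.apis %}
  <li><a href="{{ api.url }}">{{ api.name }}</a> - {{ api.tag }}</li>
{% endfor %}
</ul>
```

Because the page is rendered at publish time, the result is a fully static, cacheable HTML page, with no server side data layer at all.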

Anyways, I haven't had much time to talk about this shift in my data management approach, so I wanted to capture some of my current thoughts about the potential of Jekyll as a static data engine--we will see where it goes for me.



from http://ift.tt/1NHVVXZ

Tuesday, April 19, 2016

Upstream Transparency Is A Serious Concern But Downstream Not So Much

My ongoing stories around how APIs can help make business, educational, healthcare, government, and the other transactions that we are increasingly seeing move online more transparent enjoy a regular amount of snickers, subtweets, and laughs from folks in the space. Whether it is about opening up access to our social media data, all the way up to how our home or student loans are hammered out, to how international trade agreements are crafted--I feel APIs have a role to play when opening things up (yeah I know I'm biased).

With that said, APIs are NOT the answer in all scenarios. I do not see them as the absolute solution, but they do offer promise when it comes to giving individuals a voice in how social media, financial, healthcare, and other data and content is used, or not used. A lot of people who are part of the startup machine love to laugh at me, pointing out how uninformed I am about how things work. We have to keep things private, and in the hands of those who are in the know--this is how business works, and everything would come to a screeching halt if we forced everything to open up.

While there is a wide variety of motivations behind why people poke at me about my perspective, I am noticing one important thing. They disagree with me when I point out things that are downstream from them, but they agree with me when it is upstream from them. Meaning if I focus on federal agencies like the IRS, or possibly banking on Wall Street, and other regulatory and legal influences--they are all about transparency. If it is downstream from them, with their existing customers, or would-be customers for their future startups--hell no, transparency will always hurt things. Just leave it to the smart, cool kids to make the important decisions; the average person shouldn't worry their little head about it.

This is privilege. You are unable to see people below the position you enjoy in society. You have an overinflated sense of what you are capable of, and believe that you will always do right by everyone. You possess an attitude that all your customers, and the users who will be using your platform and tooling, should just be thankful that you are so smart, and built this amazing platform for everyone to use. Please do not bother us elite with concerns about privacy, security, and well being. We know best! We have things under control. However those bankers building the banking platform we depend on, the people running those higher educational institutions, or those nasty regulators who craft the platforms through which we are required to report business operations--they can't be trusted! #TRANSPARENCY

Ok. Done ranting. Let me close by saying that transparency does not equal good. Transparency can be very bad, and I am not prescribing it in all situations. I am just pushing back on those who push back on me when I ask for transparency in the platforms I depend on for my business. Please do not try to disarm me with absolutes. I'm just asking for transparency in the business and political dealings that directly impact me as an individual, and my business. I am also suggesting that more tech folks work to understand that there is a downstream impact from their world, one they might not see because of the privileged position they enjoy--so please think a little more deeply as you race to get what is yours.



from http://ift.tt/1SRuU5B

Saturday, April 16, 2016

I Severely Overestimated People's Desire To Deliver Real Solutions When It Came To APIs

I have made a lot of mistakes, and plenty of totally incorrect assumptions about the API space in my last five years trying to help seize this narrow window of opportunity I feel we have when it comes to defining our digital self--using APIs. As business leaders, IT, and developer practitioners, APIs provide us a big opportunity to understand our digital assets, better define them, and open up access to them.

This vision of what we can do when it comes to APIs is a pretty powerful one to consider, but it is also something that quickly morphs, distorts, and becomes something entirely different when humans are involved--sometimes with positive outcomes, but increasingly they are negative. I'm feeling like I severely overestimated people's willingness to truly want to deliver solutions using APIs. Whether it was the API consumer, provider, or the service provider delivering their warez to the space--the majority of folks seem most interested in their own selfish needs.

While I often work to paint an optimistic view of all actors involved in the production that is the API space, here is how things are breaking out in my mind:

  • API Providers - The average company or business really does not have any interest in properly getting to know its digital self, or doing the hard work necessary to define it, and is perfectly happy just buying into the next wave of techno-solutionism, never actually making any true change.
  • API Consumers - Really have no interest in getting to know the API provider, are looking to get something for free, and are willing to sign up for multiple accounts and engage in other behavior that really is about extracting as much value as they can, while giving nothing in return.
  • API Service Providers - Couldn't care less about the quality of API implementations, as long as API providers are using their solutions, ideally with a 2-3 year contract, so they can meet their numbers--we have the techno-solutions you need to not make any change.
  • Investors - The longer I spend in the space, the more the strings of the investors become evident. These strings are almost always at odds with the things that truly make APIs work, like trust in a provider, transparent business models, and a sensible road map.

Y'all deserve each other in my opinion. The problem is you are all gumming up any forward motion we were actually enjoying. I'm hearing business folk talk about APIs like they are the next dashboards and analytics, a sort of catch-all solution to the problem of the day. I'm watching providers, service providers, and investors chase the money, and not actually invest in what is needed to do APIs right.

In short, there is a lot of money to be made, and to be spent, when it comes to APIs--money that will do absolutely nothing to provide your company with real world solutions. You have to make sure you invest in the right people within your organization. People who will own the API vision, and help build internal capacity to help your organization understand its digital self, and deliver API driven solutions correctly. You do need services and tooling to make this happen, but the core of it will be your people doing the hard work to define your core business value, while mapping out the digital version of your organization, the bits and bytes that make this happen, and the other human stakeholders that are involved.

I am just venting, as I continue to see waves of companies walking around with their heads cut off talking API, and spending money on API, with service providers lining up to sell meaningless API solutions to them. I am also seeing investors guiding both API providers and API service providers in ways that have nothing to do with what is needed to deliver a solution that will make a real impact--they are just looking to meet the numbers they've set.

As usual, this kind of shit doesn't stop me. I'll keep doing API for myself, and for the small handful of people who want to actually do a better job of defining their digital self, and own the creation, storage, orchestration, publishing, and syndication of that self. While y'all are all over there finding the best API driven way to lock up people's digital self, and extract as much value as you can for you and your partners, I'll be over here answering questions for people who want to take control over their own individual professional and business digital presence.



from http://ift.tt/1TbP2Co

Wednesday, April 13, 2016

It Will Be All About Doing Bots, Selling Picks & Shovels, Providing Platforms, Or The Anti-Bot Technology

As I monitor the wacky world of bots that is unfolding, I am beginning to see some pretty distinct pools of bot focused implementations, which tell me two things: 1) There will be a lot of activity in this space in the coming year & 2) It is going to be a helluva shit show of a ride.

If you haven't picked up on it yet, everyone is talking about bots (even me). It's the new opportunity for startups, investors, and consumers! Whether it's Twitter, Slack, or any of your other preferred messaging channels, bots are invading. If you believe what you read on the Interwebz, the bot evolution is upon us--let's all get to work.

First, you can build a bot. This is by far the biggest opportunity around, by developing your own sentient being, that can deliver anyone sports stats and financial tips in their Slack channel. Every techie is busy sketching out their bot design, and firing up their welder in the workshop. Of course I am working on my own API Evangelist bot to help do my evil bidding, because I am easily influenced by trends like this.

If building bots isn't in your wheelhouse, then you can always sell the picks & shovels to the bot builders. These will be the API driven dictionaries, file conversion, image filtering, SMS sending, flight lookup, hotel booking, and other picks and shovels that the bot builders will need to do what they do. You have a database of election data? Maybe a bunch of real estate and census data? Get to work selling the picks and shovels!

If you aren't building bots, or equipping the people that are, you are probably the Twitters, Slacks, Microsofts, and Facebooks of the world, looking to be the platforms on which all the bots will operate, and assault the humans. Every platform that has an audience in 2016 will have a bot enablement layer by 2018, or will go out of business by 2020. Not really, but damn, that sounded so good, I'm going to keep it.

The best part of diving deeper into the world of bots is that you don't have to look too far before you realize this is nothing new. There have been chatbots, searchbots, scrapebots, and formbots for many, many years. Which really completes the circle of life when you realize there is already a thriving market for anti-bot technology, as demonstrated in this email I received the other day:

I wanted to follow up with you and see if you might benefit from an easy and accurate way to protect your website from bad bots, API abuse and fraud.

With Distil Networks, you can put an immediate stop to competitive data mining, price scraping, transaction fraud, account hijacking, API abuse, downtime, spam and click fraud. 

Website security can be a bit overwhelming, especially with bot operators becoming more sophisticated.

We've put together eight best practices for fighting off bot intrusions to give you a solid foundation and complete understanding when evaluating bot mitigation vendors.

Bots have been delivering value, and wreaking havoc, for some time now. With this latest support from platforms, new attention from VCs, and a fresh wave of bot builders and their pick and shovel merchants, I think we will see bots reach new heights. While I would love for these heights to be a positive thing, I'm guessing with the amount of money being thrown at this, it will most likely be a cringeworthy shit show.

Along with voice enablement, I will be tracking on the world of bots, partly because I can't help myself (very little self control), but mostly because I think the possibilities are significantly increased with the number of API driven resources that are available in 2016. The potential for more interesting and useful bot implementations, if done right, is pretty huge when you consider the wealth of high value data, content, and algorithmic resources available to the average developer.

I am working to remain optimistic, but will not be holding my breath, or expecting the best in all of this.



from http://ift.tt/1Sbw6DG

Monday, April 4, 2016

I Am Building More Single Repo Apps

Github has been playing a central role in my existence as the API Evangelist for a number of years now. I began migrating all my side projects to use Github Pages back in January of 2013, something I continued to evolve throughout the year, resulting in an approach to running applications entirely on Github Pages, using the Github master branch as the backend.

This approach to building applications using Github repos and Github Pages is not a design approach you want to use in all use cases, but I'm enjoying using it for some very simple, yet forkable applications:

  • CSV Converter - A CSV to JSON, and back again, converter, with the ability to store converted data in the master repo via the Github API.
  • RSS Aggregator - A multi-RSS feed aggregator, which pulls the RSS feeds of URLs in a central JSON file, then writes the aggregated posts to another JSON file--with an RSS feed of the aggregate.
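The CSV converter is a good example of just how little logic these apps actually need. A naive client-side sketch of the conversion in both directions (no quoted-field handling, so this is the shape of the thing, not what the real tool does):

```javascript
// Naive CSV to JSON: the first row is treated as headers, and every
// other row becomes an object. Quoted fields and embedded commas are
// deliberately not handled in this sketch.
function csvToJson(csv) {
  const lines = csv.trim().split('\n');
  const headers = lines[0].split(',');
  return lines.slice(1).map((line) => {
    const cells = line.split(',');
    const row = {};
    headers.forEach((h, i) => { row[h.trim()] = (cells[i] || '').trim(); });
    return row;
  });
}

// And back again: an array of flat objects to CSV, using the keys of
// the first object as the header row.
function jsonToCsv(rows) {
  const headers = Object.keys(rows[0]);
  const body = rows.map((r) => headers.map((h) => r[h]).join(','));
  return [headers.join(',')].concat(body).join('\n');
}
```

Everything else the app does is UI, plus writing the result back to the repo through the Github API.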

These are just two examples of simple client-side solutions I'm developing. I'm further solidifying my approach to what I was going to just label as Single Page Apps (SPA), but really they are more Single Repo Apps (SRA)--because we need another fucking acronym. Seriously though, I'm going to dial this in, and use it to develop an army of apps that each do one thing and do it well.

I like this approach because it allows me to keep the apps I develop small, and easy to design, develop, operate, and then also easily throw away and / or replace. Or not throw them away, and just let them live on, with minimal maintenance. Part of what makes this possible is that these apps run 100% on Github, with the following architecture:

  • Github Master Branch - This acts as the starting point for every app, using Github as its OS, but also the master branch as its central public or private data store. 
  • Github Pages Branch - This acts as the UI for each application, providing a public, SSL enabled URL for engaging with each application that is made available.
  • Jekyll - This provides the content and data management system for the application, allowing for the app to be published using Markdown and markup, with a central YAML data store.
  • YAML / JSON / CSV Data Store - By using Jekyll, you immediately get a _data folder which turns any YAML, CSV, or JSON file into a data store for each application.
  • Liquid - Jekyll employs Liquid for rendering data and content to each site page, allowing for the publishing of any data you have available in the _data folder.
  • Github API - The Github API provides access to all the moving parts of each app that is developed in this way, allowing for programmable control over all architecture.
  • Github.js - A Github API JavaScript client which allows you to engage with the Github API within any application, using Github OAuth and personal tokens.
  • Github OAuth - Each app leverages the existing Github authentication infrastructure for managing user profiles, as well as the authentication needed to use the API for reading and writing of content or data.
  • Github Issues - Each app uses the Github issue management as its central support and communication channel, allowing for engagement with key stakeholders, as well as end-users.

Each application runs 100% on Github, and works to minimize its dependencies on external resources. If it does work with outside elements, it uses RSS, an API, or other open approach to operate. I'm seeing more API providers also leverage Github authentication, when they have API management providers like 3Scale, making authenticated API access a reality, as well as using resources from publicly available APIs.

What I like most about this approach to developing apps is that they can be forked and run by anyone who uses Github. I watched my GF Audrey Watters fork the RSS Aggregator in about 5 minutes, with just a couple of hiccups, which I will smooth out. The goal is to make these apps easily forkable, minimizing their dependencies, and empowering any user to put them to work to solve a single problem they have (aka convert CSV to JSON, aggregate RSS feeds).

Anytime data or content needs to be written to the underlying data store, you just have to have an OAuth token or personal token issued from a Github account that has permission to the Github repo in which an app operates. Github makes it easy to generate and manage these within your Github account, but I also use OAuth.io to help streamline this when needed. Ultimately, I try to minimize the dependencies when I can. I want to get better at minimizing all external dependencies, and being more clear about which ones end up needing to be built in.

I have a whole list of Single Repo Apps I'd like to see become a reality. Now that I'm seeing some agility in my infrastructure after going API first for all my Kin Lane and API Evangelist operations, I'm able to focus more energy on extending this way of life to how I build out tools and apps. This is why I'm formalizing my approach, drawing a line in the sand, and making sure every new UI, tool, or app I build from here forward is developed in this way. Eventually I'd like every aspect of my UI to operate as a Single Repo App, providing me (and anyone else) with a lego box of UI goodness.



from http://ift.tt/1RAJP82

Wednesday, March 30, 2016

Indie Ed-Tech, The University And Personal APIs: Drawing Lines In The Sand To Define Our Digital Self

When you talk about Reclaim Hosting, or the Reclaim Your Domain concepts of owning and operating your domain, and living a POSSE way of life, to your average IT or developer folk, they will most often shrug, point to GoDaddy, and let you know how it's not a thing. When you have conversations with indie ed-tech folks, the conversation takes on a new form, helping individuals, organizations, and even entire higher educational institutions better understand their digital self.

Ok, hippie, what is a digital self? This is the version of ourselves, our companies, and our institutions that we have been giving birth to over the last 20 years of our increasingly online lives. Companies, organizations, institutions, government, and individual citizens are using the web to define who they are, often by having a website, a blog, or an Instagram, Twitter, and Facebook presence. Some companies are better at it than others, with many of the craftiest convincing everybody else to develop their presence within their domain--owning the value of anything being generated.

I discussed one of many examples of this in action with my story about operating your blog on Medium's platform. This isn't a bad thing, unless you haven't thought about the pros and cons, are just giving away the value you generate (in this case blog posts), and don't take any steps to retain control over what you create each day. You need to understand where the lines exist in the sand, and take every opportunity you can to redraw those lines in your favor--you know Ev Williams does this (he's savvy).

Here is my line in the sand at Medium: 

It is in Medium's interest to get me to publish my stories (exhaust from my writing) there. However in my world, 98% of my writing always begins within my domain(s):

Every single idea for a story actually begins with a single API call to:

Sorry, you have to be authenticated to GET from that URL, let alone POST new notes. This is where I flesh out my ideas for stories, using my notes API.

Once an idea matures, it might eventually graduate to be a blog post:

I leave that endpoint open for anyone to GET from. My API Evangelist and Kin Lane blogs all pull their blog posts from here. When a new blog post is added, and tagged for publishing on either kinlane.com or apievangelist.com, it triggers the publishing of the blog post to the targeted web site.

At that moment in time the blog post is published under the CC-BY Creative Commons license

With each story published, and openly licensed within my own domain, now I can start thinking of other domains, where I might wish to grant a license, allowing them to use a copy. You might find my work at:

These are all lines I've drawn in the sand, as I work to define myself online. Some of these lines will fade over time, as I lose interest in a platform. I know I lost interest in:

And this line is getting pretty faded:

I know Tumblr is big for some folks, but I just never quite got excited about it, and don't really publish there (I stopped when Posterous went away).

I really am just drawing lines in the sand--some of these lines fade away with time, with others playing varying roles in how I define myself online. Some of them play a strong part...

Twitter does this:

So does Github:

Both of these lines in the sand are very important to my digital self. I have agreements with both of these companies when it comes to these lines. I pay almost $250.00 to Github, and I pay nothing to Twitter. I am constantly redrawing these lines in the sand, multiple times a day, while also feeling nervous about my relationship with both platforms on a daily basis. Twitter frustrates me more than Github, but I don't trust either of them to really give a shit about me and my lines in the sand on their beach. They couldn't care less if I'm there each day to redraw these lines, or just fade away.

This is why everything in my world begins within my domain. There will always be a portion of my digital self that exists on other beaches, but my goal is to draw as many of the lines, store, and capture the value from my daily exhaust within my domain. If you run your own small business like I do, this is critical. This is how you will control your intellectual property, make your living, and maintain direction over your career, and where you go in life. The more you do this on other people's domains, the less ownership you will have, and the less control over the direction that you go.

This is the conversation we are having within our Indie Ed-Tech, University and Personal API working group, asking questions:

  • What does this look like from the institutional standpoint?
  • What does this look like through a teacher's eyes?
  • How do we expose students to this way of thinking?

Schools like Brigham Young, the University of Oklahoma, and Davidson, who are deploying domains for their students using providers like Reclaim Hosting, are pushing this conversation forward amongst institutional leadership, IT, faculty, and students. Teachers and students can go beyond having a subdomain or folder on the university domain, and launch their own domains.

When you talk to CIO Kelly Flanagan (@kelflanagan) from BYU about the future, he wants to publish student information to the student's own domain. All incoming freshmen receive their own domain, storage, and what they need to be digitally literate, something they will take with them when they leave school and enter the real world--setting a pretty high bar for the expectations of where the lines in the sand will exist for them.

The World Wide Web looks very different to a digitally literate individual. When you are equipped to define your own world, draw the lines in the sand as you see fit, and learn to bend things in your favor, you know how to ask the right questions:

  • Is it possible for this to start in my own domain, then publish elsewhere?
  • Does this online service allow me to get my information out, and delete my account?
  • Is that line in the sand acceptable to me? Where are the opportunities for negotiation?
  • This information is important to me, will it be secure if I put it in that location?
  • What are the motivations of the organizations I work for, and the companies whose services I use?
  • I don't think that content, image, video, or other element reflects me, how do I delete it?
  • I don't think you are trustworthy with my information anymore, please end our relationship!

You become better equipped to ask the questions that you will need to define the version of your digital self that you want to see. The one your customers will see. The profile(s) that your employers will see. The side of you that your friends and family will see. What you will need to make money from the exhaust generated by your hard work each day. In college it may be for fun, but in a few years you will have a professional reputation to maintain.

As an individual, company, or institution, you want as much control over your online existence as you possibly can get. The lines you draw in the sand are important. It is critical that you stop from time to time and take assessment of the lines you've drawn (aka Google yourself). Get rid of old accounts. Clean up dead profiles. You should also be taking every opportunity you can to make sure you draw lines within your domain, while applying more thoughtfulness to the lines you draw in other people's domains.



from http://ift.tt/1q3oxWH

Tuesday, March 29, 2016

The Silence On Your Blog Is Way More Damaging Than Any Spelling Mistake On Mine

I get the whole spectrum of spelling and grammar trolls as the API Evangelist. I get the ones who are nasty, and leave comments about how they'd respect what I say if only I'd learn to edit. All the way to my favorite ones, who submit pull requests with corrections...and everything else in between.

This post is about the ones in between, another class of spelling / grammar trolls who have their own blog, which isn't nearly as active as it should be. I always strive for quality editorial processes on API Evangelist, but it is something that after I read a post for the 3rd or 4th time, I have to step back from--good enough. Since I'm also the broke ass evangelist, I can't afford an editor either.

I understand that the remaining spelling / grammar mistakes can be a sin, but I'm here today to talk to you about an even greater sin! Perhaps one of the worst sins of all, in my opinion: not blogging at all! If nobody hears your voice, your perfect spelling and grammatical prowess will never matter, let alone your actual opinion.

I am judging all of you, each and every day, for the meaningful, impactful posts you aren't publishing!! Unacceptable! The silence on your blog is way more damaging than any spelling mistake on mine. ;-)



from http://ift.tt/1pIi02I

Friday, March 25, 2016

It Is Not You Being Evil I Worry About, It Is You Being Greedy And Privileged

This post started as a Tweet the other day, but since I'm in a ranty mood today, I pulled it out to help emphasize several other stories I've written today. A pretty standard response I get from folks who have been indoctrinated into the Silicon Valley culture, when I write or say something that pushes back on VCs, is to dismiss any notion that they are using technology for evil purposes.

Aside from this being a pretty simplistic, broad response to a (hopefully) precise, nuanced argument, I'm not actually worried about 99% of you actually planning on doing evil and sinister things with technology. As with my last post on the unintended consequences of API patents, I don't think you are up to evil things. I just think that on your greed driven quest to be wealthy, and from your privileged position as a white male, you are missing a whole lot of bad shit that is happening along the way.

You see, I am not worried about the intentionally bad shit people do with technology, but I am very worried about the all-encompassing need to get VC money for your tech idea, and all the bad shit that will happen along the way, and after you've cashed out (or failed) and moved on to your next idea. This is where the damage will occur, and because you enjoy a privileged place in the world, all the fallout you don't see from your vantage point will be assumed by everyone else.

So please, stop talking about doing evil in your arguments with me, and instead help me understand that you have done the hard work to understand the lower-level, unintended negative fallout that is occurring every day from all y'all's quest for riches, fame, and tech glory.



from http://ift.tt/1UhPBhc

Monday, March 21, 2016

I Predict We Will Be Using HTTP/1.1 451 Unavailable For Legal Reasons A Lot More In The Future


I'm playing catch-up with some of my news and information curation, something that always builds up when I travel. As part of my work I came across RFC 7725, HTTP/1.1 451 Unavailable For Legal Reasons, providing an HTTP status code for use when access to a resource is denied as a consequence of legal demands.

The RFC states that "responses using this status code SHOULD include an explanation, in the response body, of the details of the legal demand: the party making it, the applicable legislation or regulation, and what classes of person and resource it applies to"--something I wonder will even be possible in some situations.
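Adapted from the example in the RFC itself, a 451 response might look something like this, with a Link header (rel="blocked-by") identifying the entity implementing the block, and the legal details in the body:

```http
HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://spqr.example.org/legislatione>; rel="blocked-by"
Content-Type: text/html

<html>
  <head><title>Unavailable For Legal Reasons</title></head>
  <body>
    <h1>Unavailable For Legal Reasons</h1>
    <p>This request may not be serviced in the Roman Province
    of Judea due to the Lex Julia Majestatis, which disallows
    access to resources hosted on servers deemed to be
    operated by the People's Front of Judea.</p>
  </body>
</html>
```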

Anyways, I wanted to make sure this HTTP status code was filed as part of my overall API Evangelist consciousness. I have a feeling that we will be using HTTP/1.1 451 Unavailable For Legal Reasons a lot more in the future, and want a reference point to link to. Hopefully someone out there will be tracking the number of 451 status codes that emerge, so we can graph over time how many online resources go 451--I do not have the resources to do it myself.

Seems like it will be a pretty telling metric of the health of this online experiment we are calling the web.



from http://ift.tt/1pGspN0

Thursday, March 10, 2016

Never Dismiss The Power Of Storytelling

As I work my way through my news feeds, aggregated across the blog RSS and Twitter streams of the companies I track in the API space, as well as the leading organizational and individual tech blogs I subscribe to, I notice one of several storytelling memes being tested at any given moment. The tech story du jour is about wearables being applied to managing the older generations of our world (innovation?).

When you monitor as many feeds and Twitter accounts as I do, you start to see these patterns of storytelling emerge, popping up on a handful of mid-size blogs, and then eventually some of the bigger blogs. Wearables for monitoring and managing the old folks is just one of the latest; insurance has been another, and on, and on. Sometimes these stories take root, and begin to get retold by folks who think these stories are true, and are put out there to genuinely educate them.

The industry news and blogs are not telling you about how this API-driven device is radically shaking up insurance for altruistic reasons, out of a desire to keep you informed about what is going on. 95 percent of this storytelling across the tech space has precise targets in mind, looking to influence investors, employees, and other key actors in this Broadway musical we are all playing a role in.

Storytelling is central to everything. It is how startups influence investors, and investors influence markets, which influences the enterprise, which influences industry, and government--everyone tells the story they think will result in the outcomes they desire. I do this. I tell stories to influence what I want to see in the space. So does Google. So does AT&T. So does the White House. So do the Koch Brothers. Everything we read is published to influence us--whether it is fiction, non-fiction, or a combination of the two (aka marketing).

I started API Evangelist in 2010 so that I could tell stories about what people were doing in the API space. I saw that API architects, designers, and developers all sucked at telling stories about what they were doing. I stepped in to fill the gap. Six years later, most folks operating APIs still suck at telling stories, and when I tell them that the most important tool in their toolbox is storytelling, they quite often chuckle and dismiss me. This is so common that I regularly have folks use the dismissal of storytelling to downplay the importance of what I do, and what it is that I am saying.

API Evangelist is just a blog. You are just a single person. The enterprise doesn't listen to stories. Government doesn't care about public stories. I hear these arguments regularly, as people try to put me in my place, or just lash out because they don't agree with what I just published. That is fine. You don't have to agree with me, but you should never dismiss the power of storytelling. Our reality is just layers of carefully (or not so carefully) crafted stories, influencing us at every turn.

To bring this back down: most of the criticism thrown at me is true. API Evangelist is just a blog. I am just a single person. Most in the enterprise and government will not pay attention to what I say, because of the way they see the world. However, this doesn't stop my stories from influencing folks. An average post of mine will be read by a couple hundred people, and then never be seen again. However, 1% of my posts will have a significant impact, like the Secret to Amazon's Success: Internal APIs, which enjoys 2K page views a week, almost four years later, and comes up in conversations across the industry, from banking to government, around the globe.

And that story is 98% bullshit! People believe it. People eat it up. People love stories. This is why the tech sector is always moving on to the next great thing, while simultaneously testing out thousands of possible next great things through storytelling, seeing what bait brings in the biggest fish (THE OPPORTUNITY IS THIS BIG!). Stories are everything. Whether it's marketing, PR, or the hacker storytelling theater that I perform, it is what makes all of this go around, and it is what markets are based upon--never dismiss the power of storytelling.

One irony in all of this: a majority of the folks who dismiss me and my evangelism of the storytelling process are often trying to get me to talk more about what they are working on. Something I'm happy to do, but understanding and distilling down the complexity often present in the world of APIs is hard work, and crafting meaningful stories from all of it is more art than science. It takes practice. This is why you see so much work published across the API Evangelist network--it is all practice, building up to the one story I tell that gets retold, and retold, and retold--never stop telling stories.



from http://ift.tt/1LfhcMQ

Monday, January 25, 2016

Writing Blog Posts That Increase Page Views, But Minimize Shares

Oftentimes I feel like I'm swimming upstream in my daily work. My work is not ad-based. It is based upon individual human relationships that I have with actual people. Remember people? Those are the carbon life forms we all have to meet with in the real world, not the virtual representations we interact with online each day.

Oftentimes I feel like I am working with a balance of the two. I need the page views to justify the relationships that I establish online, but if I do not reinforce them offline--they mean nothing! Everything I do occurs as an online transaction, but the money does not ever actually transact unless the relationships are reinforced in an offline environment.

The most value I generate daily exists beyond the page view, after the view, and at the point of receipt...which is impossible to measure in a digital sense. The only thing I can do is stay true to the delivery, and not get caught up in measuring the view, which quickly disappears before the receipt is ever acknowledged. This is the digital economy. I just do not have the time to wait; I have to get back to producing, and delivering.

Sure, some other party can step in and care about the view, and struggle to understand the receipt, but this is what record labels, publishers, and other middlemen have done through the ages. I do not have time for this. I am a creator. I am the source. I generate the exhaust, and leave it to the rest of you to determine the value, and fight over the scraps.

I am maximizing the view, but minimizing the share. Most that read what I write do not want to share it. They want to keep it for themselves. This is an entirely different market than the PPV and PPC world that is being monetized as we speak. How do we incite, rather than extract and monetize? Inciting is a much more difficult challenge than simply getting rich off the emotion that we invoke in others--for me, I need everything to actually result in action, rather than just a response.

As I look through the "numbers", I cannot help but think everyone is looking, but holding back when it comes to the actual sharing--something I will explore further in future posts.



from http://ift.tt/1NwGwYE

I Wish I Could Select From My Own Templates When Setting Up Github Pages Project Site

All of my public presence runs as hundreds of separate Github projects. Because content, code, and JSON data drive my world, Github + Github Pages is a great way to run my operations. This approach to running my business allows me to break up my research projects into small bite-size repositories, grouped into organizations, where I can collaborate with partners, and the public at large, using Github's numerous social features--I call this Hacker Storytelling.

When I set up a new project, I first set up the master branch, making it public or private, depending on my goals. Then I always set up a gh-pages branch, to act as the public face for each project. Part of the process is always clicking next, and next again, through the default page and template portion of the Github Pages setup flow. I always just choose the default settings, because once I've checked out the gh-pages branch, I immediately replace it with my own template, depending on the type of project it is.

I have around five separate templates that I use, depending on whether it is an API portal, an open data project, or a handful of other variations I use to collaborate around API-related projects. I wish Github would let me specify my own templates, allowing me to add one or many template repositories, which could be used to spawn my new gh-pages projects. This would save me a couple of extra steps with the setup of each project.

I'm not sure how many projects other Github users are setting up, or maybe I am just a special snowflake, but I can't help but think others would find this beneficial. As another, more advanced feature, it would be nice to have a reseller layer to this, where I could create account-level template galleries, from which my clients could then set up new organizations and repositories, all driven by a master set of templates that I control.

Just brainstorming the possibilities here. It is what I do. If you happen to stumble across this post Github, it would be a sweet new feature that I hope others would find valuable too!



from http://ift.tt/1ZNMyLG

Friday, January 22, 2016

Overview Of My Knight Funded Adopta.Agency Project

This is an overview of my Adopta.Agency open data project, which was funded by the Knight Foundation in the summer of 2015.

Born Out Of President Obama's Mandate
The Adopta Federal Agency project, now shortened to just Adopta.Agency, was born out of the presidential mandate by Barack Obama that all federal agencies should go machine readable by default, and instead of publishing just PDF versions of information, they should be outputting XML, CSV, JSON, and HTML formats. This mandate is still being realized across the US federal government, and has helped put into motion a great deal of open data work across all agencies, opening data that impacts almost every business sector today.
Worked On Open Data As A Presidential Innovation Fellow
I personally worked on the open data effort in the federal government as a Presidential Innovation Fellow, or simply PIF. I worked on open data inventory efforts at the Department of Veterans Affairs, and saw firsthand the hard work going on in government. A lot of very hardworking folks were focused on meeting the mandate, discovering open inventory assets like veteran hospital locations, and veteran program data. The challenge is not finding data at the VA; it is often the process of cleaning it up, vetting it, and making it available in simple, machine readable formats that is the true challenge.
It Is Not About Technology, But A Process Blueprint To Apply
While in government I saw that technology only gets you so far, and that the biggest challenges are around just having the resources to make valuable data available as CSV, JSON, and if possible APIs. The government just doesn't have the resources, or the skills, to always make this a reality. Adopta.Agency is designed to not just be yet another technological solution; it is designed to be a process blueprint, to help passionate government workers, or the average citizen, take already existing, publicly available open data, and move it forward one or two steps. The goal of Adopta.Agency projects is to simply clean up the data and make it available on Github, in CSV and JSON formats, as well as publishing a full API when possible.
You Can "Adopta" Agency, Project, or Data
Adopta.Agency is focused on empowering anyone to target a government agency, and / or a specific project and data, and help encourage more to be done with the data, bringing awareness to the fact that the data exists, and to what is possible when it is available as simple, machine readable resources in a public Github repository. Adopta.Agency isn't the next technological solution; it is a blueprint to help anyone conduct the hard work of moving forward the open data conversation for an agency, project, or individual piece of data.
Using Github Repository, With Pages Making Everything Forkable
Adopta.Agency relies on the social coding platform Github for much of its functionality. The Adopta.Agency blueprint is available as a forkable Github repository, which allows anyone to take the master blueprint, fork it, and transform it into their own open data project, following the Adopta.Agency process, without knowing how to program. Github provides an environment that allows for evolving open data using some of the same approaches used to push forward open source software development, but focused on making open data more accessible and usable via Github repositories.
Single YAML Checklist That Is Easy To Follow
Github uses YAML as a machine readable format for much of its content and data management, and Adopta.Agency leverages this, making the Adopta.Agency blueprint process accessible in this very human-friendly data format. Everything within any Adopta.Agency project is editable in each project's central YAML file, allowing you to edit everything from the project details, to links to each of the APIs and open data files. YAML makes each blueprint machine readable, but in an easy-to-follow, single, process-driven checklist that anyone can follow without needing to understand how to program or read JSON.
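As a purely hypothetical illustration (the field names here are my own, not necessarily the blueprint's actual schema), a project's central YAML file might look something like this:

```yaml
# Illustrative Adopta.Agency project file -- field names are hypothetical.
title: VA Hospital Locations
description: Cleaned-up CSV, JSON, and API access to veteran hospital location data.
url: https://example.github.io/va-hospitals/
tags:
  - veterans
  - healthcare
data:
  - name: Hospital Locations
    csv: data/hospital-locations.csv
    json: data/hospital-locations.json
apis:
  - name: Hospital Locations API
    url: https://example.com/api/hospital-locations
```

The point is that a non-developer can change a title, add a data file, or link a new API by editing plain, readable text, with Github Pages and Jekyll rendering the rest.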
Defining A Clear Objective For Adopta Projects
Each Adopta.Agency project has its own objective, targeting a single agency, project, or specific set of data. Using the central YAML file, each project owner can edit the title, description, URL, tags, and other details that articulate the objective of the project, making the goals as simple and clear as possible. This is something many existing open data efforts suffer from--government workers just don't have the time or awareness to craft a concise statement describing the project.
Focus On Cleaning Up Data And Making Available As CSV And JSON
The primary function of any Adopta.Agency project is to target some specific data, clean it up, and make it accessible via Github as CSV and JSON files. This is something many government agencies also do not have the time or expertise to make happen, and they could use the help of citizen data activists who are passionate about the areas of society where open data can make an impact.
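The conversion step itself is simple enough to sketch. Assuming a cleaned-up CSV (the file names and sample records below are hypothetical, not from any actual project), a few lines of Python will produce the matching JSON file, ready to commit to a repository:

```python
import csv
import json

# A tiny sample of cleaned-up data -- a real project would start from an
# agency spreadsheet export instead of writing the CSV inline like this.
with open("hospital-locations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "city", "state"])
    writer.writerow(["VA Puget Sound", "Seattle", "WA"])
    writer.writerow(["VA Portland", "Portland", "OR"])

# Read the CSV back in; each row becomes a dict keyed by the header row.
with open("hospital-locations.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Write the same records out as a JSON array alongside the CSV.
with open("hospital-locations.json", "w") as f:
    json.dump(rows, f, indent=2)

print(len(rows), "records converted")
```

Publishing both formats from the same source keeps the CSV for spreadsheet users and the JSON for developers, without either drifting out of sync.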
Share Data and Content As Public API When It Makes Sense
The first step of Adopta.Agency projects is around cleaning up open data, and making it available as simple CSV and JSON, with the second step focused on making these formats available as an interactive API (if it makes sense). While a more advanced component of any project, an API can be easily deployed, without any programming experience, using modern, cloud API management solutions.
Showcase What Is Being Built On Each Adopta Project
Open data efforts are all about providing actual solutions, and Adopta.Agency is focused on making sure small, meaningful elements like interactive widgets, visualizations, and other tooling are provided. Each project should have a handful of tools that put the data made available via each project to work, helping expand understanding of the topic area, showcasing the value created, and the potential of open data.
Highlight The Team Behind All Of The Work
None of this open data work happens without people. Showcasing the open data and tooling is important, but it is also important to showcase the team behind it. The Adopta.Agency blueprint has built in the ability to showcase the tooling built, as well as the people and personalities behind the hard work going on.
Share What Tools Were Used In Each Project
The engine behind the Adopta.Agency blueprint process is a carefully assembled and developed toolbox. The Adopta.Agency toolbox includes Github as the platform, Github Pages for free project hosting, Google Spreadsheets as the data store, API Spark for API deployment from the Google Spreadsheets, a CSV converter, a JSON Editor Online, and D3.js for visualizations from CSV and JSON files. All of this can be run for free, unless projects are of a larger scale, or require private repositories for collaboration--if kept public, Adopta.Agency projects should not cost project owners any money.
Answer Any Questions Up Front
Each project that uses the Adopta.Agency blueprint also has a frequently asked questions section, helping answer any obvious questions someone might have about a project, while also forcing each project owner to think through common questions. The FAQ section encourages project stewards to regularly add questions, making it an easy resource for users to get up to speed around a project.
Offer Support Through Github Issues
One of the most vital aspects of using Github is the usage of the Github issue management features for establishing a feedback loop around each Adopta.Agency open data project. Think Facebook, but for each of the open data projects, encouraging conversation, questions, and valuable feedback that helps move any project forward, making the work a social endeavor.
Tell The Story As A Jekyll Blog
One extremely important aspect of the Adopta.Agency lifecycle is telling the story of what you do. The content generated as part of this portion of operations provides a rich exhaust that is indexed by search engines and amplified on social media, helping increase awareness of the open data work. This adds another dimension to the open data process, one which is missing from many of the existing government efforts.
I Applied Adopta.Agency To The White House Budget
To develop the base prototype for the Adopta.Agency project, I applied it to something I am very passionate about: the budget for the federal government. I feel pretty strongly that having the existing spreadsheets for the US budget available as JSON and CSV will help us better understand how our government works, and will help us tell better stories about spending using visualizations and other tooling.
Next I Applied It To The Department of Veterans Affairs
To continue the work I had already been doing at the Department of Veterans Affairs, I wanted to push forward the Adopta.Agency blueprint by applying it to very important, veteran-related data. I got to work cleaning up population-focused data, healthcare expenditures, insurance programs, loan guarantees, and veteran deaths. I have since hit a significant data set, breaking down VA spending by state, which I will break out into its own project.
My Partner Did It With My Brother's Keeper Data
With a prototype blueprint hammered out, my partner in crime Audrey Watters was able to push the blueprint forward some more by applying it to My Brother's Keeper data, which provides valuable data on men & boys of color. She is still working through the numerous data sets available, and telling the story along the way, which has resulted in some very interesting conversations around how this work can be expanded in different directions, like collecting the same data for women & girls of color, and possibly other ethnic groups.
Attracted Someone Passionate About Fisheries
As I was telling the story about building the Adopta.Agency blueprint, an individual contacted me to see if we could possibly apply Adopta.Agency to the area of commercial fisheries. To support the request, I have begun a project that is focused on NOAA fishery industry data, pulled the available data sets, and started the process of cleaning them up in Google Sheets--work that will continue.
What Were The Challenges?
During the six months of work there were numerous challenges identified, beginning with the stigma around Github being a very difficult platform for non-developers to use, and concerns around the technical skills needed to work with JSON and APIs. There were additional concerns around interest in making this type of change in government, and whether the average citizen has the passion to make this work. Overall, the people we spoke with felt it was a viable approach, and that the challenges could be overcome if there were proper support across all projects and the Adopta.Agency community. A certain amount of trust will have to be established between data stewards, the public, and the government agencies involved.
What Is Next For Adopta.Agency?
We felt the Adopta.Agency prototype was a success, and we will continue to work on projects, as well as work to amplify the approach to opening up data. While a lot of work is involved with each project, the simplicity and journey users experience along the way really resonated with potential data stewards and project owners. We have a number of people who engaged in conversations throughout the prototype work, and we will be engaging these groups to continue with the momentum we were able to establish within the last six months.
Looking At Applying To International Aid Work
Our work on Adopta.Agency has also opened up a conversation with Sonjara, Inc. around a handful of additional potential grants, where we could apply the open data and API blueprint in the area of foreign aid, and government spending at the international level. Our two groups have had initial conversations, and will be targeting specific funding opportunities to help apply the existing work within this much-needed area of government open data.
Opening Up The Valuable Information At Privacy Rights Clearinghouse
Another conversation that was opened up as part of the Adopta.Agency project was with the Privacy Rights Clearinghouse, which has been a steward of privacy rights educational content and security breach data since 2005. The organization is very interested in making more of its rich data and content available via APIs, allowing it to enrich other web and mobile applications. This would be an exciting area to begin moving Adopta.Agency beyond just government data, and to help non-profits like the Privacy Rights Clearinghouse.
Adopta.Agency For The 2016 Presidential Election
My primary supporter 3Scale, who has supported my regular API and open data work since I was a PIF, has expressed interest in sponsoring the Adopta.Agency blueprint to be applied to the 2016 presidential election. The objective would be to target open data that would prove valuable to journalists, bloggers, analysts, and other active participants in the 2016 election. We are in initial discussions about how the process can be applied, and what funding is needed to make it a success.
Improving Upon The API Evangelist Portal Template
Some of the work included in the Adopta.Agency blueprint has been included in the next version of an existing open API portal template hosted on Github. With the open licensing of the Adopta.Agency work, I am able to easily integrate it into any existing open or commercial project that I work on, which provides a forkable, easily deployable developer API portal for launching in support of open API efforts. The Adopta.Agency blueprint has provided this work with some much more non-developer-friendly ways of handling API portal operations, which can be applied to open API efforts across numerous business sectors.
Target More Agencies
With Adopta.Agency, we will continue targeting more government agencies. We have a list of interested individuals with passions for opening up government agencies, ranging from NASA to Department of Justice policing data. We have a short list of over 25 federal government agencies to target with Adopta.Agency projects; the only limitations are human and financial resources.
Target More Data
Along with the federal agencies we will be targeting, some of the conversations our Adopta.Agency work has opened up push the model beyond just the federal government. Projects focusing on election data could span both public and private sector data. The Privacy Rights Clearinghouse will do the same, pushing us to make more data available for consumption across all layers of the economy.
Target More Grant Funding
In our conversations with Sonjara, Inc. and the Privacy Rights Clearinghouse, we are looking for specific grant-funded projects where we can apply the process developed as part of the Adopta.Agency work. In 2016, we are looking to target up to five new grant opportunities, seeking to move the entire project forward, as well as potentially spawn individual open data projects, expanding the Adopta.Agency community.
Evolve the Blueprint's Reach
While pushing the Adopta.Agency blueprint forward will occur in 2016, the most significant portion of the project's evolution will involve reaching more individuals, building more relationships, encouraging more conversations, and yes, opening up more open data across the government and other sectors of the economy. In 2016, our goal will be to focus on evolving the reach of Adopta.Agency, by continuing to apply the blueprint to new projects, and working with other passionate individuals to do the same, evolving the blueprint's reach and its impact.

This overview is driving my presentation at the Knight Foundation demo days, wrapping up the grant cycle, but the project will be ongoing, as this was just the seed funding needed to make it a reality.



from http://ift.tt/1QpucQt