Saturday, April 23, 2016

The Potential Of Jekyll As A Static Data Engine

I am an old database guy. I got my first job working on databases in COBOL in 1987. I have worked with almost every database platform out there, and I love data. I remember writing my own indexes, relationships, and other things we take for granted now. I remember being religious and dogmatic about the platforms I used, which included FoxPro and eventually Microsoft SQL Server. I have grown out of any dogma for any platform, tool, or specific approach, but I continue to manage quite a bit of data for my personal and professional pleasure.

Data is core to API Evangelist, and my API industry research. Even though I still have an Amazon RDS MySQL core to my data operations, this centralized approach is slowly being cannibalized by a more distributed, static, JSON and YAML, and Jekyll driven vision. Increasingly my data is living in the _data folder of each static project repo, being hosted on Github Pages, as well as some specialized Linux Jekyll EC2 deployments I am working with. I do not think this will ever be something that entirely replaces my old, more centralized approach, but it has grown to be a significant portion of my operations.

There are many issues with this approach--keeping things up to date, providing a comprehensive search, and other things are still challenges for me. However, the static nature of both the UI and the data layer for these projects is proving to have benefits that far outweigh the challenges. The openness and availability of the data and content that drives all my research, project sites, and tooling is refreshing for me. I'm also enjoying having hundreds of very static, cached, distributed websites, and tools that don't change -- unless I direct them to with a publish or a push.

One area I am still exploring in all of this is the pros / cons of delivering UI experiences with pure JavaScript which consumes the JSON, a heavier Liquid experience which also consumes the JSON, or taking it to new levels with a combination of the two. Some of the stuff I'm doing with APIs.json and OpenAPI Spec driven documentation, which uses both Liquid and JavaScript, feels liberating as an approach to delivering developer experiences (DX). If you haven't played with the _data folder + Liquid in Jekyll, I highly recommend it--it is a different game.
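
To make that concrete, here is a minimal sketch of the "pure JavaScript" side of the equation -- it assumes a hypothetical apis.json index published alongside the site, and a placeholder api-listing element in the page, so treat the names as illustrative rather than anything I actually run:

```javascript
// Minimal sketch of the pure JavaScript approach: the JSON driving the page
// is fetched client-side and rendered into the DOM, instead of being baked
// in by Liquid at build time. The /apis.json path, the field names, and the
// api-listing element are hypothetical placeholders.
fetch('/apis.json')
  .then(function (response) { return response.json(); })
  .then(function (index) {
    var list = document.getElementById('api-listing');
    (index.apis || []).forEach(function (api) {
      var item = document.createElement('li');
      item.textContent = api.name + ' - ' + api.description;
      list.appendChild(item);
    });
  })
  .catch(function (error) {
    console.log('Could not load the API index: ' + error);
  });
```

The Liquid flavor of the same thing just loops over site.data at build time, which is why the combination of the two feels so liberating.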

Anyways, I haven't had much time to talk about this shift in my data management approach, so I wanted to capture some of my current thoughts about the potential of Jekyll as a static data engine--we will see where it goes for me.




Tuesday, April 19, 2016

Upstream Transparency Is A Serious Concern But Downstream Not So Much

My ongoing stories about how APIs can help make the business, educational, healthcare, government, and other transactions that we are increasingly seeing move online more transparent enjoy a regular amount of snickers, subtweets, and laughs from folks in the space. Whether it is about opening up access to our social media data, all the way up to how our home or student loans are hammered out, or how international trade agreements are crafted--I feel APIs have a role to play in opening things up (yeah, I know I'm biased).

With that said, APIs are NOT the answer in all scenarios. I do not see them as the absolute solution, but they do offer promise when it comes to giving individuals a voice in how social media, financial, healthcare, and other data and content is used, or not used. A lot of people who are part of the startup machine love to laugh at me, pointing out how uninformed I am about how things work. In their view, we have to keep things private, and in the hands of those who are in the know--this is how business works, and everything would come to a screeching halt if we forced everything to open up.

While there is a wide variety of motivations behind why people poke at me about my perspective, I am noticing one important thing. They disagree with me when I point out things that are downstream from them, but they agree with me when it is upstream from them. Meaning if I focus on federal agencies like the IRS, or possibly banking on Wall Street, and other regulatory and legal influences -- they are all about transparency. If it is downstream from them, with their existing customers, or would-be customers for their future startups -- hell no, transparency will always hurt things. Just leave it to the smart, cool kids to make the important decisions; the average person shouldn't worry their little head about it.

This is privilege. You are unable to see people below the position you enjoy in society. You have an overinflated sense of what you are capable of, and that you will always do right by everyone. You possess an attitude that all your customers and users who will be using your platform and tooling should just be thankful that you are so smart, and built this amazing platform for everyone to use. Please do not bother us elites with concerns about privacy, security, and well-being. We know best! We have things under control. However, those bankers building the banking platforms we depend on, the people running those higher educational institutions, or those nasty regulators who craft the platforms through which we are required to report our business operations -- they can't be trusted! #TRANSPARENCY

Ok. Done ranting. Let me close by saying that transparency does not equal good. Transparency can be very bad, and I am not prescribing it in all situations. I am just pushing back on those who push back on me when I ask for transparency in the platforms I depend on for my business. Please do not try and disarm me with absolutes. I'm just asking for transparency in the business and political dealings that directly impact me as an individual, and my business. I am also suggesting that more tech folks work to understand that there is a downstream impact from their world that they might not see, because of the privileged position they enjoy--so please think a little more deeply as you race to get what is yours.




Saturday, April 16, 2016

I Severely Overestimated People's Desire To Deliver Real Solutions When It Came To APIs

I have made a lot of mistakes, and made plenty of totally incorrect assumptions about the API space over my last five years trying to help seize this narrow window of opportunity I feel we have when it comes to defining our digital self--using APIs. For business leaders, IT, and developer practitioners, APIs provide a big opportunity to understand our digital assets, better define them, and open up access to them.

This vision of what we can do when it comes to APIs is a pretty powerful one to consider, but is also something that quickly morphs, distorts, and becomes something entirely different when humans are involved--sometimes with positive outcomes, but increasingly negative ones. I'm feeling like I severely overestimated people's willingness to truly want to deliver solutions using APIs. Whether it was the API consumer, provider, or the service provider delivering their warez to the space -- the majority of folks seem most interested in their own selfish needs.

While I often work to paint an optimistic view of all actors involved in the production that is the API space, here is how things are breaking out in my mind:

  • API Providers - The average company or business really does not have any interest in properly getting to know its digital self, or doing the hard work necessary to define it, and is perfectly happy just buying into the next wave of techno-solutionism, never really making any true change.
  • API Consumers - Really have no interest in getting to know the API provider, are looking to get something for free, and will sign up for multiple accounts, and engage in other behavior that really is about extracting as much value as they can while giving nothing in return.
  • API Service Providers - Couldn't care less about the quality of API implementations, as long as API providers are using their solutions, ideally with a 2-3 year contract so they can meet their numbers -- "we have the techno-solutions that you need to not make any change."
  • Investors - The longer I spend in the space, the more the strings of the investors become evident. These strings are almost always at odds with anything that truly makes APIs work, like trust in a provider, transparent business models, and a sensible roadmap.

Y'all deserve each other, in my opinion. The problem is you are all gumming up any forward motion we were actually enjoying. I'm hearing business folk talk about APIs like they are the next dashboard and analytics, a sort of catch-all solution to the problem of the day. I'm watching providers, service providers, and investors chase the money, instead of investing in what is needed to actually do APIs right.

In short, there is a lot of money to be made, and to be spent, when it comes to APIs that will do absolutely nothing to provide your company with real world solutions. You have to make sure you invest in the right people within your organization--people who will own the API vision, and help build the internal capacity your organization needs to understand its digital self and deliver API driven solutions correctly. You do need services and tooling to make this happen, but the core of it will be your people doing the hard work to define your core business value, while mapping out the digital version of your organization, the bits and bytes that make this happen, and the other human stakeholders that are involved.

I am just venting, as I continue to see waves of companies walking around with their heads cut off talking API and spending money on API, and service providers lining up to sell meaningless API solutions to them. I am also seeing investors guiding both API providers and API service providers in ways that have nothing to do with what is needed to deliver a solution that will make a real impact -- they are just looking to meet the numbers they've set.

As usual, this kind of shit doesn't stop me. I'll keep doing API for myself, and for the small handful of people who want to actually do a better job of defining their digital self, and own the creation, storage, orchestration, publishing, and syndication of that self. While y'all are all over there finding the best API driven way to lock up people's digital self, and extract as much value as you can for you and your partners, I'll be over here answering questions for people who want to take control over their own individual, professional, and business digital presence.




Wednesday, April 13, 2016

It Will Be All About Doing Bots, Selling Picks & Shovels, Providing Platforms, Or The Anti-Bot Technology

As I monitor the wacky world of bots that is unfolding, I am beginning to see some pretty distinct pools of bot focused implementations, which tell me two things: 1) there will be a lot of activity in this space in the coming year, and 2) it is going to be one helluva shit show of a ride.

If you haven't picked up on it yet, everyone is talking about bots (even me). It's the new opportunity for startups, investors, and consumers! Whether it's Twitter, Slack, or any of your other preferred messaging channels, bots are invading. If you believe what you read on the Interwebz, the bot evolution is upon us--let's all get to work.

First, you can build a bot. This is by far the biggest opportunity around: developing your own sentient being that can deliver anyone sports stats and financial tips in their Slack channel. Every techie is busy sketching out their bot design, and firing up their welder in the workshop. Of course I am working on my own API Evangelist bot to help do my evil bidding, because I am easily influenced by trends like this.

If building bots isn't in your wheelhouse, then you can always sell the picks & shovels to the bot builders. These will be the API driven dictionaries, file conversion, image filtering, SMS sending, flight lookup, hotel booking, and other picks and shovels that the bot builders will be needing to do what they do. You have a database of election data? Maybe a bunch of real estate and census data? Get to work selling the picks and shovels!

If you aren't building bots, or equipping the people that are, you are probably the Twitters, Slacks, Microsofts, and Facebooks of the world, looking to be the platforms on which all the bots will operate, and assault the humans. Every platform that has an audience in 2016 will have a bot enablement layer by 2018, or it will go out of business by 2020. Not really, but damn, that sounded so good, I'm going to keep it.

The best part of diving deeper into the world of bots is that you don't have to look too far before you realize this is nothing new. There have been chatbots, searchbots, scrapebots, and formbots for many, many years. Which really completes the circle of life when you realize there is already a thriving market for anti-bot technology, as demonstrated in this email I received the other day:

I wanted to follow up with you and see if you might benefit from an easy and accurate way to protect your website from bad bots, API abuse and fraud.

With Distil Networks, you can put an immediate stop to competitive data mining, price scraping, transaction fraud, account hijacking, API abuse, downtime, spam and click fraud. 

Website security can be a bit overwhelming, especially with bot operators becoming more sophisticated.

We've put together eight best practices for fighting off bot intrusions to give you a solid foundation and complete understanding when evaluating bot mitigation vendors.

Bots have been delivering value, and wreaking havoc, for some time now. With this latest support from platforms, new attention from VCs, and a fresh wave of bot builders and their pick and shovel merchants, I think we will see bots reach new heights. While I would love for these heights to be a positive thing, I'm guessing with the amount of money being thrown at this, it will most likely be a cringe-worthy shit show.

Along with voice enablement, I will be tracking on the world of bots, partly because I can't help myself (very little self-control), but mostly because I think the possibilities are significantly increased with the number of API driven resources that are available in 2016. The potential for more interesting and useful bot implementations, if done right, is pretty huge when you consider the wealth of high value data, content, and algorithmic resources available to the average developer.

I am working to remain optimistic, but will not be holding my breath, or expecting the best in all of this.




Monday, April 4, 2016

I Am Building More Single Repo Apps

Github has been playing a central role in my existence as the API Evangelist for a number of years now. I began migrating all my side projects to use Github Pages back in January of 2013, something I continued to evolve throughout the year, resulting in an approach to running applications entirely on Github Pages, using the Github master repo as the backend.

This approach to building with Github repos and Github Pages is not a design approach you want to use in all cases, but I'm enjoying using it for some very simple, yet forkable applications:

  • CSV Converter - A CSV to JSON (and back again) converter, with the ability to store converted data in the master repo via the Github API (a sketch of the conversion step follows after this list).
  • RSS Aggregator - A multi-feed RSS aggregator, which pulls the RSS feeds for URLs listed in a central JSON file, then writes the aggregated posts to another JSON file -- with an RSS feed of the aggregate.
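
To give a sense of how little code these take, here is a rough sketch of the conversion step at the heart of the CSV Converter -- plain JavaScript, no libraries, and admittedly naive about quoted or escaped fields, so treat it as an illustration rather than the actual app:

```javascript
// Naive CSV-to-JSON conversion: assumes a simple comma-delimited file with a
// header row and no quoted or escaped fields. A real converter needs more care.
function csvToJson(csvText) {
  var lines = csvText.trim().split('\n');
  var headers = lines[0].split(',');
  return lines.slice(1).map(function (line) {
    var values = line.split(',');
    var row = {};
    headers.forEach(function (header, i) {
      row[header.trim()] = (values[i] || '').trim();
    });
    return row;
  });
}

// Example: turn two rows of CSV into an array of objects, ready to be written
// back to the master repo via the Github API.
var records = csvToJson('name,type\nAPI Evangelist,blog\nKin Lane,person');
console.log(JSON.stringify(records, null, 2));
```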

These are just two examples of simple client-side solutions I'm developing. I'm further solidifying my approach to what I was going to just label simply as Single Page Apps (SPA), but really they are more Single Repo Apps (SRA) -- because we need another fucking acronym. Seriously though, I'm going to dial this in, and use it to develop an army of apps that each do one thing and do it well.

I like this approach because it allows me to keep the apps I develop small and easy to design, develop, operate, and then easily throw away and/or replace. Or not throw away, and just let them live on with minimal maintenance. Part of what makes this possible is that these apps run 100% on Github, with the following architecture:

  • Github Master Branch - This acts as the starting point for every app, using Github as its OS, and the master branch as its central public or private data store.
  • Github Pages Branch - This acts as the UI for each application, providing a public, SSL enabled URL for engaging with each application that is made available.
  • Jekyll - This provides the content and data management system for the application, allowing the app to be published using Markdown and markup, with a central YAML data store.
  • YAML / JSON / CSV Data Store - By using Jekyll, you immediately get an _data folder which turns any YAML, CSV, and JSON file into a data store for each application.
  • Liquid - Jekyll employs Liquid for rendering data and content to each site page, allowing for the publishing of any data you have available in the _data folder.
  • Github API - The Github API provides access to all the moving parts of each app that is developed in this way, allowing for programmable control over the entire architecture (a read sketch follows after this list).
  • Github.js - A Github API JavaScript client which allows you to engage with the Github API within any application using Github OAuth and personal tokens.
  • Github OAuth - Each app leverages the existing Github authentication infrastructure for managing user profiles, as well as the authentication needed to use the API for reading and writing content or data.
  • Github Issues - Each app uses Github issue management as its central support and communication channel, allowing for engagement with key stakeholders, as well as end-users.
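
To show how the master branch ends up acting as the data store, here is a rough sketch of the read path using nothing but the Github contents API -- the owner, repo, and file path are placeholders, and the token is only needed for private repos or to stay under rate limits:

```javascript
// Sketch of the read path: pull a JSON file out of the _data folder on the
// master branch using the Github contents API. The some-user/some-repo repo,
// the _data/listings.json path, and the token value are all placeholders.
var personalToken = 'YOUR-PERSONAL-TOKEN';
var url = 'https://api.github.com/repos/some-user/some-repo/contents/_data/listings.json?ref=master';

fetch(url, { headers: { 'Authorization': 'token ' + personalToken } })
  .then(function (response) { return response.json(); })
  .then(function (file) {
    // The contents API returns the file body base64 encoded, with line breaks.
    var listings = JSON.parse(atob(file.content.replace(/\s/g, '')));
    console.log('Loaded ' + listings.length + ' records from the data store.');
  });
```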

Each application runs 100% on Github, and works to minimize its dependencies on external resources. If it does work with outside elements, it uses RSS, an API, or another open approach to operate. I'm seeing more API providers also leverage Github authentication when they have API management providers like 3Scale, making authenticated API access a reality, as well as the use of resources from publicly available APIs.

What I like most about this approach to developing apps is that they can be forked and run by anyone who uses Github. I watched my GF Audrey Watters fork the RSS Aggregator in about 5 minutes, with just a couple of hiccups, which I will smooth out. The goal is to make them easily forkable, minimizing their dependencies, and empowering any user to put them to work solving a single problem they have (aka convert CSV to JSON, aggregate RSS feeds).

Anytime data or content needs to be written to the underlying data store, you just have to have an OAuth token or personal token issued from a Github account that has permission on the Github repo in which an app operates. Github makes it easy to generate and manage these within your Github account, but I also use OAuth.io to help streamline this when needed. Ultimately, I try to minimize dependencies when I can. I want to get better at minimizing all external dependencies, and be more clear about which ones end up needing to be built in.
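
As a rough sketch of what that write looks like when you go straight at the Github contents API (the repo name, file path, and token here are placeholders -- Github.js wraps these same endpoints if you would rather not hand-roll the calls):

```javascript
// Sketch of the write path: update a JSON file in the _data folder using a
// personal or OAuth token issued from an account with permission on the repo.
// The contents API needs the current blob sha when updating an existing file,
// and the body must be base64 encoded. Repo, path, and token are placeholders.
var token = 'YOUR-PERSONAL-TOKEN';
var fileUrl = 'https://api.github.com/repos/some-user/some-repo/contents/_data/listings.json';

fetch(fileUrl, { headers: { 'Authorization': 'token ' + token } })
  .then(function (response) { return response.json(); })
  .then(function (existing) {
    var updated = btoa(JSON.stringify([{ name: 'new record' }], null, 2));
    return fetch(fileUrl, {
      method: 'PUT',
      headers: { 'Authorization': 'token ' + token, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message: 'Updating the _data store',
        content: updated,
        sha: existing.sha,
        branch: 'master'
      })
    });
  })
  .then(function (response) {
    console.log(response.ok ? 'Data store updated.' : 'Write failed.');
  });
```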

I have a whole list of Single Repo Apps I'd like to see become a reality. Now that I'm seeing some agility in my infrastructure after going API first for all my Kin Lane and API Evangelist operations, I'm able to focus more energy on extending this way of life to how I build out tools and apps. This is why I'm formalizing my approach, drawing a line in the sand, and making sure every new UI, tool, or app I build from here forward is developed in this way. Eventually I'd like every aspect of my UI to operate as a Single Repo App, providing me (and anyone else) with a lego box of UI goodness.


