Wednesday, October 19, 2016

Drones Are A Reflection Of Wider Tech Space For Me

I love thinking about and playing with my drones, but probably not for the reasons you might think. Drones, for me, are a reflection of the wider tech space, filled with endless opportunities for good and bad examples of how tech can be put to use. One example from the real world today that I think reflects the widening gap between what technology is promised to do and what it can really do is the Google / Chipotle drone delivery story.

In short, the drones do not bring a burrito to you in any convenient, or expected, way. It is all staged to provide marketing for Google and Chipotle, hype up the world of drones, and NOT about actually having drone technology make our lives easier. It is all technology theater, designed to keep you believing that these companies are innovating, and that technology is making your life better.

I find it tough to be a critic in the tech space when this type of behavior is going on. There is an overabundance of individual belief in the power of technology, as well as organized storytelling and marketing that takes this to almost religious levels. If you start questioning this reality, there are plenty of folks who love to jump on you. Whether you are questioning the reality around the self-driving car, artificial intelligence, the blockchain, or that you'll have burrito and weed delivery in your neighborhood next week, the system is designed to shut you down.

A controlled drone experience on campus, with a wave of positive press ahead of it, and almost no press about the outcomes, is pretty much what about 90% of the tech space is up to these days. Invest heavily in the spectacle, the promises, and the press campaigns, and be as defensive as you can when it comes to the outcomes. There is a lot of money to be made on the upside, and the downside, if you position yourself in the right ways--it has almost nothing to do with technology, and everything to do with making money.

I am not saying there will not be drone deliveries in the future. I am saying that there will be a million bullshit stories about drone deliveries, investments made and lost, and tech bloggers and readers who eat up every wave of shit thrown at them, before we ever see drone deliveries. I would say that it is more likely that any opportunity for drone deliveries at the consumer level gets blown to pieces by drones dropping pizzas on your roof, concerns around privacy, and generally bad behavior from drone operators, before it is ever technically feasible.

Now take drones, and replace them with artificial intelligence, self-driving cars, bots, and on, and on... #FUTURE


Do Not Blog Or Generate Content < Tell Your Story

I scan a lot of blog posts as I work to monitor the API space. I subscribe to well over 1500 blogs at last count, and read the titles of thousands of posts, and the full content of hundreds of posts, each week. I see it all. Most of what I see is just content. Those 250, 500, or 750 word posts that are so easy to craft, with an unlimited number of domains willing to let you vomit them up for free in exchange for making a name, or sometimes willing to pay you $10, $15, or $25--it is easy, they are just words!

I can tell that most of it is written to meet some perceived notion of the minimum viable content needed to play in some SEO game. Most of it is written by someone who barely cares about the products and services they are paid to write about. I sift through endless amounts of this stuff looking for the stories. In 2016, the return on this investment is worth it. I love it when I do find a story written by someone who cares about a product, a service, the technology behind it, and the customers that they are in service of.

When I find these people telling stories in the space, I tend to follow them around online. The saddest thing I witness on a regular basis is when these folks go to work for the enterprise, where storytelling isn't encouraged. When I catch folks telling stories I harass and encourage them to keep telling them, no matter what environment they find themselves working in. Who knows, it might keep them telling stories, even if it is not on a regular basis.

I am always super confused when folks tell me that stories do not matter and that nobody pays attention to my stories. Everything is about stories. The stories startups tell. The stories VCs tell. The stories we tell each other. The problem is that we all seem to grow up and stop believing, subscribe to the notion that we should be blogging and generating content, and forget how to tell the stories that matter, the stories that people will listen to, remember, and tell to others.


Wednesday, October 12, 2016

Why I Run Synthetic Content (#DesignFiction) Through My Network

Some people are really confused by the alternative editions of my sites. If you hadn't noticed the links in the navigation for each of my sites, there are alternative editions of the blogs to complement the main editions. What's the difference? Well, the alternative editions are fiction, and the primary editions are all non-fiction.

I published a story the other week about running synthetic data and content through your APIs, and my partner in crime expressed her sadness that it wasn't about the alternative side of my world. This prompted me to think more about why I am increasingly running "synthetic content" aka #DesignFiction through my platform on a regular basis.

  • Distraction - It is a distraction from the real-time work of spending hours monitoring the real world of APIs. Thinking about fictional concepts that are closely aligned with the regular work I am doing helps me stay fresh and creative, and reduces burnout--allowing me to be more efficient in my regular non-fiction writing.
  • Mind Expanding - The more I write, the easier it is for me to write on a regular basis. I found that my writing was suffering from focusing on a single topic. I am able to take more diverse angles on all of my work, have a more diverse set of ideas to work from, and just craft more stories since my expansion into the fictional world.
  • Out of the Box - Beyond expanding my mind, I find writing fiction alongside my regular industry analysis often puts me completely outside of the box. When it comes to monitoring the API space I'm usually focusing on what people are already doing, with the occasional filling in of the gaps--when I'm writing fiction there are no boundaries, I can talk about ANYTHING!
  • Startup Release - I can write about my ideas for startups like they exist, and explore the ideas like I was actually doing the work. The best part is that I do not actually have to do them. I can put the ideas out there, exercise the muscles, and provide seeds for other people's startups, but I don't actually have to own the shitty side of doing a startup.
  • Law Enforcement - When I research and write my fiction it allows me to explore topics and concepts that normally might get law enforcement to take another look at me. In the current online climate this can be a problem, but I can easily point to my fictional writing as the reason for my strange web searches and social activity.

These are just a handful of the benefits I'm seeing from running synthetic content through my network, alongside the regular work I do each day. Right now I am producing about 15% fictional work, and 85% non-fiction storytelling and analysis. My goal is to reach a 50/50 balance in my writing, where I am spending equal time exploring design fiction scenarios for every topic and industry I'm researching and providing analysis on.

Some folks have expressed concerns about there being confusion between my fiction and my non-fiction, but I think this confusion already exists online, from the promises made by startups regarding technology, all the way to the current cybersecurity environment being defined online. It can be difficult to tell fact from fiction--at least I use the #DesignFiction hashtag in my titles! The fuzziness between fact and fiction online is one of the reasons I think that #DesignFiction is so important, allowing us to tell stories of what might be, or could be, as a result of all this technology we are unleashing on the world.


Tuesday, October 11, 2016

I Did Not Write Everything That I Tweet Out

One thing I notice regularly in my storytelling and sharing in the API space is how many people don't really notice the authorship behind many of the stories floating around. I often see a retweet of one of my tweets sharing out a story I have read, where the person references me as the author, when it's pretty clear it isn't an API Evangelist post if you click on the link.

There are a couple of things at play here, I think. The first layer of folks making this mistake derives from the fact that people rarely actually read what they tweet out. Many times folks are just retweeting a title that resonated with them, from a Twitter account they are familiar with. I understand that folks are busy, but you really should be reading things before you share. #JustSayn

Next, I think another layer of all of this is that even when people do read a post, they tend to not always see the author behind it. I found that I didn't always notice the author, and learn their name, before I began blogging regularly myself. If you aren't authoring content, I don't think you recognize an author's work. It is one of those subtle things I think we can take for granted as we make our way around the Internet each day.

This is one of those things I won't be policing, but felt I should recognize, and help folks realize that I did not write everything that I tweet out.


Google Spreadsheet To YAML On Jekyll

I have been building out a number of micro tools on Github lately, and since I'm using a lot of the same features across many different projects, before I implement anything for a specific solution, I make sure to develop it as a standalone component. I'm trying to encourage a much more modular approach to how I develop API-centric micro tools, building a whole toolbox of reusable components that I can use in my work, and something that I can evolve over time, independently of each project.

One of the modules I developed this last weekend was a Google Spreadsheet to YAML on Jekyll solution. This solution is meant to take any Google Spreadsheet that has been made public and pull it via simple JavaScript that runs on a Jekyll website hosted on Github Pages. Once it pulls the JSON from Google it converts it to YAML, and if a valid Github OAuth token is present, it will save the YAML to the _data folder in the Github repository.
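Here is a minimal sketch of what this component looks like, assuming the spreadsheet is published via Google's public JSON feed, the js-yaml browser build for the conversion, and the classic repo.write() helper from Github.js--the sheet ID, repo, branch, and column names are all hypothetical placeholders.

```javascript
// Minimal sketch: pull a public Google Spreadsheet as JSON, convert the rows
// to YAML, and save the YAML to the _data folder using the Github API.
// Sheet ID, repo, branch, and column names are hypothetical placeholders.
var sheetUrl = 'https://spreadsheets.google.com/feeds/list/{SHEET_ID}/od6/public/values?alt=json';

fetch(sheetUrl)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // Flatten each spreadsheet row into a plain object
    var rows = data.feed.entry.map(function (entry) {
      return {
        name: entry['gsx$name']['$t'],
        url: entry['gsx$url']['$t']
      };
    });

    // Convert the rows into a YAML string (js-yaml browser build)
    var yamlString = jsyaml.dump(rows);

    // Write the YAML into the _data folder of the Jekyll site
    var github = new Github({ token: githubToken, auth: 'oauth' });
    var repo = github.getRepo('username', 'repository');
    repo.write('gh-pages', '_data/sheet.yaml', yamlString, 'update data from spreadsheet', function (err) {
      if (err) { console.log(err); }
    });
  });
```

From there, the Liquid templates in the Jekyll site can render the updated _data file the next time Github Pages rebuilds.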

I am using this component in a couple different websites and micro tools that I'm developing. Some of these projects involve a non-developer maintaining data and content via a Google Spreadsheet, then triggering the update of the Jekyll website that presents the data or content via website and application. I am increasingly storing machine-readable data and content as YAML in the _data store of various Github hosted research projects, visualizations, and other applications--so opening up this work to be easily edited via a Google Spreadsheet is an important piece of the puzzle for me.

As with each component I am developing, you can find the code for this one in its Github repository, and you can leave any feedback as an issue on the work.


Friday, October 7, 2016

An Air Gap Between Me And The Online World

After coming back from the woods I have worked hard to put in place, and maintain, what I'm calling an air gap between me and the online world. It's basically putting distance between me and what happens online, giving me room to breathe, increasing my overall productivity, and leaving me with more time to be creative.

After I checked out this summer I had to uninstall all unnecessary applications from my iPhone, iPad, and Macbook. When I did have a 3G network or wireless Internet connection, it was always crap, and I needed to keep communication channels open for only the most important of things. After I came back to work I was very cautious and thoughtful about which channels I turned back on, and let into my life once again. For the ones I did, I make sure there is an air gap in place, keeping me in control.

Whether it's Google and Medium analytics, or my email inbox, the Facebooks or Twitters and the numerous Slack channels I'm on--they are all demanding my attention. An air gap isn't a fix-all solution, but it does give me the space to focus on what matters and get the important things done. I don't get caught up in every demand for my attention, and I don't get sucked into unhealthy conversations and toxic battles like I used to. It's not that I don't engage anymore, I am engaging even more than before, I am just being much more thoughtful about where and how I engage. 

I'm writing about it because I want to get better at talking about it, and making sure that I keep an air gap in place. The more I talk about it, the more I'm aware that it is in place. The more I remember not to have Facebook, Twitter, and Slack open at all times. The more I remember not to respond to any topic, without some deep thought and writing on the subject. I'm not the fastest responder to email, but that is ok. The critical conversations still occur, and the people that matter get through. The rest will wait.

An air gap isn't the perfect solution, but it does give me the space I need to do my research consistently, and maintain a positive outlook on what is going on in my online world.


My Workbench Blogging

I have my own style of blogging that I call workbench blogging. While I do work to edit and polish the stories I publish regularly across my blogs, it is more important to me that I'm producing content alongside my daily research than it is to be precise in its delivery. Think of my blogs as my workbench, and the stories you read each day as the notes about what I am working on in my workshop.

As I'm working, I'm crafting stories to help me think through what I am working on. I act like I'm having a conversation with myself, and with you, my readers, to help me evolve and polish my approach. At the same time, I'm generating content that can be discovered via search and social media, helping immediately make my work accessible to others. I also use my own blog as a discovery mechanism, helping me remember specific companies, services, tools, and other parts of my research for use in future work.

This approach to blogging is not for everyone. I find it rewarding. I get to work through my ideas and research while sharing with others. I find it is an easy way to create a lot of content when you are this transparent. Almost everything I do as API evangelist becomes content, definitions, and code that can be reused--this is why it all runs on Github. My entire workbench is accessible to the public, and my ideas are right out in the open, allowing you to reuse, while also allowing me to make a living and keep doing what I do.

Thanks for taking the time to stop and read my workbench blog.


Thursday, October 6, 2016

Many Dimensions Of How Net Neutrality Can Be Interpreted And Used

A European Union agency has said that mobile network Three's plans to offer ad-blocking would violate net neutrality, which I think is the perfect example of how laws trying to protect the virtue of the Internet will ultimately play out. I'm not saying that we shouldn't try to have these laws in place, it is more a nod to the dark creativity of capitalism in finding a way around the containment powers of laws and regulation.

I am pretty sure that the experiment we know as the web is over as we knew it. Relevant to this story, I am pretty confident the lust for advertising has been one of the main reasons the web is so fucked up. There is just too much money in the game now. The stakes are high, and people are too greedy to go back to the way things were. Sadly, thinking about loopholes in laws, and understanding how to twist and bend good laws for those in power is how some smart people enjoy spending their time.

The social walled gardens we tend, like Facebook, and the aggressive nature of the public social gardens we loiter in, will continue to drive us toward a world where we ask for net neutrality to be broken. We will demand a clean, sanitized, and ultimately corporate vision of the web because it is safer and more secure than the alternative. While I am fascinated by the dimensions of how net neutrality can be interpreted and put to use, I'm saddened that we couldn't make the web work as it was intended by its creators.


Wednesday, October 5, 2016

Technology Carnival Barker Prediction Market Will Be Worth 2 Trillion By 2021

One of the themes that really stands out in my monitoring of the API space lately is the quantity of information flowing around out there trying to convince all of us about what the next thing will be. When you are down in the weeds these things tend to flow by and do not mean much, and some even seem like they might be a reality. However, after you step away for a bit, and then come back, you really begin to notice how much is dedicated to shaping our reality across the digital space.

It feels like I'm walking through a real-time digital carnival with barkers coming at me every couple of blog posts and tweets. Believe in our DevOps solution! Come see my analytics dashboard here. IoT in the home is what you should be thinking about. Are you investing in VR? Blockchain will do it all! It is interesting to watch the themes that have legs, and the ones which are one night stands. Some of these are interesting carnival rides, but most are barely a dime show attraction.

If I cared more, I'd be tracking the claims being made, and associating them with a company, and even an individual--keeping a scorecard of the claims being made, and what kind of track record people actually have (This will go down on your permanent record). Except I have better things to be doing, and have to constantly work to stay focused on what truly matters--even amongst all this noise. I've never been one to make predictions like a real, pants wearing industry analyst, but I think that the technology carnival barker prediction market will be worth 2 trillion by 2021.


Activated Two Factor Authentication With All My Critical Accounts

As I work to maintain my online presence, I am always looking for ways to keep my presence, data, and content protected. My latest crusade is focused on two-factor authentication. While I did have two-factor authentication enabled for Google, I did not have it enabled for Github, AWS, and Apple. I am not sure why I hadn't; probably just a time thing, but now they are all activated.

I'm thankful that AWS, Github, and Google all use the Google Authenticator app, which centralizes my management of the codes required to validate I am who I am. With all the hacks going on, specifically the most recent one from Yahoo, I am stoked to be using 1Password to manage all of my accounts, as well as employing two-factor authentication wherever it is available--especially on the accounts that are most important to me.

If you aren't familiar with two-factor authentication, it is a secondary way for platforms to validate who you are when your password is being changed, or your account is being accessed. Platforms can validate you with SMS or via the Google Authenticator app, but recently SMS has been deemed insecure--so try to rely on the authenticator solution when possible. If a service you depend on doesn't offer two-factor, make sure to let them know it is important to you--there is even a handy service that will help you do this.
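For a sense of what the authenticator app is doing behind the scenes, here is a minimal sketch of the TOTP scheme (RFC 6238) that Google Authenticator and similar apps implement--the secret shown is just the standard RFC test value, while real secrets are handed out as Base32 strings when you scan the QR code.

```javascript
// Minimal TOTP sketch (RFC 6238), the scheme behind authenticator apps.
// Assumes the shared secret is already decoded to raw bytes; real
// authenticator secrets are distributed as Base32 strings.
const crypto = require('crypto');

function totp(secret, timeStepSeconds = 30, digits = 6) {
  // Counter = number of 30-second steps since the Unix epoch
  const counter = Math.floor(Date.now() / 1000 / timeStepSeconds);
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(counter));

  // HMAC-SHA1 over the counter, keyed with the shared secret
  const hmac = crypto.createHmac('sha1', secret).update(buf).digest();

  // Dynamic truncation (RFC 4226) picks 4 bytes based on the last nibble
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;

  return String(code).padStart(digits, '0');
}

// Example using the RFC test secret: the platform holds the same secret,
// derives the same six-digit code for the current window, and compares.
console.log(totp(Buffer.from('12345678901234567890')));
```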

In the current online environment, we need all the protection we can get. Two-factor is currently one of the most important ways we can defend the online services we depend on. Make sure you activate it on all your critical accounts--I recommend starting with your primary go-to locations like Apple or Google.


Tuesday, September 27, 2016

My Client Side YAML Editor Running 100% On Github Pages

I am working on a base template which acts as a seed for all of my micro tools that run 100% on Github. I have a number of little micro tools I want to build, and I needed a series of core building blocks that I could fork and reuse across all my tools. I produced this base template, and next up was creating a simple YAML editor that would allow me to edit the YAML files in the _data folder of any Jekyll site hosted on Github Pages.

The objective here is to provide little applications that use the native functionality of Jekyll and Liquid for displaying static, data-driven content, and when I need to write data back to the YAML files I use JavaScript and the Github API. The critical piece is the authentication with the Github API in a 100% client-side environment (ie. Github Pages). This is something I've done before using my own Github OAuth proxy, but for these projects I want them to be forkable, and all you need is a Github personal token (which all Github accounts can generate) to get access.

YAML In The _data Folder In Jekyll Sites
When you put YAML into the _data folder of any Jekyll-driven site hosted on Github some interesting things become possible. Jekyll takes the YAML and loads it up as an object you can easily reference using Liquid markup. This makes Jekyll perfect for building little data-driven micro tools, with the YAML as the core. If you need JSON, XML, Atom, or other representations, you can easily publish pages that output in these formats.

This YAML is accessible in the _data folder of the Github repository that houses this site--I just want to provide a simple Github Gist for reference in this story. This YAML will be the driver of static, as well as dynamic, content and data used across this prototype micro tool.

Static Listing Of YAML Data Using Liquid
Next up is to display the YAML data available in the _data folder and render it for viewing. Using Liquid I am able to dynamically generate an HTML listing of all the data in the YAML, acting as the static listing of the products that I have in my data store. The Liquid that dumps all the products in the YAML file is available in the Github repository for this project.

Liquid does have a learning curve, but once you get the hang of it, and have some base templates developed--it gets easier. I've been able to recreate anything that I would accomplish dynamically with PHP and a MySQL database, but using Liquid and YAML data stores. 

Editing YAML Files Client Side With Github API
I have my YAML data store, and a basic static listing of the data store, and now I want to edit it. Using Github.js and the Github API I am able to mount the YAML files in the _data/ folder and list them out on the page with text boxes for editing. This obviously won't work for very large YAML files, but for sensibly structured data, kept in small bits, it will work just fine.

Once I'm done working with the data, I can save the form--I wrote some JavaScript that traverses the form and updates the YAML file using the Github API. The trick is that reading from the YAML file, and writing to it via the API, isn't allowed unless you pass in a valid personal token from a Github user who has access to the underlying repository.
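A rough sketch of that read-edit-write loop, assuming jQuery for the form handling, the js-yaml browser build, and the repo.read() / repo.write() helpers from the classic Github.js client--the repo, branch, path, and field names are hypothetical placeholders.

```javascript
// Rough sketch: read a YAML file from the _data folder, render a text box
// for each value, and write the edited YAML back through the Github API.
// Repo, branch, path, and field names are hypothetical placeholders.
var github = new Github({ token: githubToken, auth: 'oauth' });
var repo = github.getRepo('username', 'repository');
var path = '_data/products.yaml';

// Read the YAML file and mount each product as an editable text box
repo.read('gh-pages', path, function (err, yamlString) {
  var products = jsyaml.load(yamlString);
  products.forEach(function (product) {
    $('#editor').append($('<input type="text" class="product-name">').val(product.name));
  });
});

// On save, traverse the form, rebuild the object, and write it back as YAML
$('#save').click(function () {
  var products = [];
  $('#editor .product-name').each(function () {
    products.push({ name: $(this).val() });
  });
  repo.write('gh-pages', path, jsyaml.dump(products), 'updating products YAML', function (err) {
    if (err) { console.log(err); }
  });
});
```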

My 100% Client Side YAML Editing Micro Tool
That is it. This is my base template for any micro tools I build that will allow for reading and writing YAML files that are stored in the _data folder. It may not seem like much at first glance, but it opens up for me a wealth of new possibilities when it comes to data-driven tooling and collaborative projects that run on Jekyll (not just Github Pages, as Jekyll can be hosted anywhere).

First, I'm offloading application operations to Github. Sure Github has some issues from time to time, but I've been running 100% on them for over two years, and it is more than sufficient for most needs. Github scales and secures my applications, and I don't have to be concerned with keeping the infrastructure up and running--I just have to keep track of my personal tokens.

Second, these apps are self-contained and forkable. Anyone can fork one, grab their own personal token, and get to work managing the YAML core of their application. This is important to me. I like my tooling like I like my APIs: little, forkable, and disposable--truly open source tooling that anyone can put to use.

This is just a base template prototype. I'll come up with more sophisticated versions soon. I just wanted to get the base template for running apps 100% on Github together, along with this simple example of reading and writing YAML data from a _data folder, before I moved on. I have a couple of API micro tools I want to develop in the area of API design, and I needed this functionality to make it all come together.

The base micro tool template and this base micro tool YAML template are both on Github.


My Forkable Base For Building Apps That Run 100% On Github

Github provides a very powerful platform for developing applications. When you use the base Github functionality in conjunction with Github Pages and the Github API, some pretty interesting approaches to application deployment emerge.

I learned this approach from Development Seed while working with the White House to open up data across federal government agencies, but it is an approach I have evolved and improved upon while developing what I am going to call Github micro tools.

My Github micro tools run 100% on Github, using Github Pages as the front-end, the Github repo as a backend, and the Github API as the communication between--with Github OAuth as the security broker of who can put the application to work.

I needed to use this approach across several different micro tools, so I thought I'd create a base template that I can use as a forkable base for these tools I'm building, while also sharing the approach with others.

Apps Running 100% On Github

I like my apps like my APIs--small and reusable. Building applications that run entirely on Github makes sense to me because it is focused on developing apps that anyone can fork and put to use under their own account--relying on Github to do all the heavy lifting, and cutting out the middleman (me). Each micro tool runs as a Github repository, which comes with all the benefits of Github, like versioning, social coding, issue management, and much more. You can fork my project on Github, and begin using it within your own Github user account or organization.

Github Pages As Application Front-End

One of the interesting features Github provides with each repository is the ability to launch a simple static site using Github Pages. I use these static project sites to run all of my API projects, and it is something I have been evolving to be the front-end for this approach to providing micro tools. Github Pages provides a simple place to put all my applications, where I can store and manage them in a very static, secure, and stable way (well, the security and stability are offloaded to Github).

Static Jekyll Application Front-End

Jekyll provides a simple, static way to help tame the front-end of the applications I am building. The static content management system provides tools for managing the look and feel of each application and the pages within, and allows me to have a blog if I want. Additionally, Jekyll provides a YAML and JSON core, which, when combined with Liquid and JavaScript, opens up some pretty interesting opportunities for building applications.

Github API As An App Connector

With the base of an application in place, I use the Github API as the connector for reading and writing data and content to the underlying Github repository for the application, in addition to relying on the native features available in Jekyll and Liquid. The API allows any application to access its underlying data store when a user is properly authenticated using a Github personal OAuth token.

Github OAuth for Authentication

To allow this application interaction to securely occur I am relying on Github OAuth as the gatekeeper. For this example, I am using Github personal tokens retrieved from within any Github account, instead of using a proxy or service, because I want this solution to be forkable and self-contained. Your tokens will not give you access to this application when it exists under my Github account, but if you fork it, your tokens will give you access to your forked version. All you do is pass a token into this page using ?token=[your token here], and the API will allow for writing to the underlying repository.

Cookie.js To Store The OAuth Token

Once the OAuth token is passed in through the URL I use cookies.js to store the token for use across all potential pages of a micro tool. This approach helps prevent the token from being included in any links and passed between pages. Once the cookie expires, the user is required to pass another valid token in through the URL to set the cookie again, opening up API access to the application's backend. This project is meant to be interactive.
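A minimal sketch of that token handling, assuming a Cookies.set() / Cookies.get() style interface from cookies.js--the cookie name and the one-day expiration are hypothetical choices.

```javascript
// Minimal sketch: grab the personal token from the ?token= query string,
// stash it in a cookie so it doesn't get carried around in links, and read
// it back on any page that needs to talk to the Github API.
// The cookie name and one-day expiration are hypothetical choices.
function getQueryParam(name) {
  var match = new RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
  return match ? decodeURIComponent(match[1]) : null;
}

var token = getQueryParam('token');
if (token) {
  // Stash the token in a cookie so other pages of the tool can use it
  Cookies.set('github_token', token, { expires: 1 });
}

// Any page in the micro tool can now pick the token back up
var githubToken = Cookies.get('github_token');
if (!githubToken) {
  console.log('Pass a valid Github personal token in using ?token= to enable editing.');
}
```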

Github.js To Communicate With API

With a valid OAuth token, I use Github.js as the client-side JavaScript client for interacting with the Github API. While Github.js allows for using almost all available API endpoints, most application functionality will just be about reading and writing YAML and JSON files using repository paths. For most of the application functionality, I rely on Liquid for reading YAML and JSON data, and Github.js for writing data to the underlying repo using the Github API, provided you have a valid Github OAuth token passed in and have access to the Github repository for the application.

Forkable Base For Apps That Run 100% On Github

I hope this provides a base project that demonstrates what is possible when you build applications on top of Github. I am going to fork it and build another prototype that reads and writes to a YAML file in the _data folder for the underlying repo, exploring what is possible when it comes to using Github as a data-driven micro tool platform.

The code that makes this happen is pretty simple, and the Github repository is meant to be pretty self-contained--the technologies at play are the ones outlined above: Github Pages, Jekyll and Liquid, the Github API, Github OAuth personal tokens, cookies.js, and Github.js.

You can find the front-end for this app, and the repo behind this project, over at my Github account. Have fun, and feel free to submit any issues if you have questions or comments.


Thursday, September 22, 2016

My Response To People Trying To Sell Me Email Lists of Corporate Users

If you are in the business of technology like I am, you probably get the random emails from people trying to sell contacts from leading technology companies. They are usually pretty savvy at getting past SPAM filters, and are persistent at trying to sell the information of users at leading companies like SalesForce, Microsoft, Amazon, and pretty much any other company out there.

Like most of the SPAM in my inbox, I just flag and move on, which I have done with these types of emails, but they keep coming, so I crafted a template email to send back to them, like I do with many of the solicitations I get (I have a folder of templates).

Thank you for your unwanted solicitation. I hope you are doing well (I do not care, but this is what you do right?) I'm in the business of being a human in the technology space, not making a profit off of selling other people's information, but hell there is good money in it right?

Would you be interested in buying the contact information of people who sell other people's contact information? You see, I track the IP address and other details of every email I receive trying to sell me contacts. I then conduct research on who they are, discovering their name, home address, phone number, and where their children go to school.

If you think this is a good thing to do, and would like to buy these from me, please send me $$$$. Cause I'm greedy bitches. Please go the fuck away and get a life.


Kin Lane

I'm sure many of these people are just poor folks doing the bidding of some pretty sleazy people who think this is a good business idea. I can't help but push back, especially when they get through the filters and take moments of my time away. Even though it is just seconds, it is still my valuable time.

I know that not everyone can find employment that is ethical and worthy of being proud of, but maybe I can scare a handful of folks into looking for employment elsewhere, and moving on. If not, at least I'm having fun, and I feel a little better.


Tuesday, September 6, 2016

Keeping Things Static With My Public Presence To Reduce Security Friction

I've been pretty vocal about running the API Evangelist network of sites on Github Pages, ever since I first started doing it back in January of 2013. Back then I was just playing around with the concept, but in 2016 my entire public presence runs on Github Pages.

There are several reasons I do this, starting with the simplicity of static website solutions like Jekyll, something that quickly evolves when you marry it with the social approach to managing code that is Github. I like managing my sites this way, but the primary reason I migrated to this setup was security. After a couple of online events where I stepped up to defend my girlfriend Audrey Watters (@audreywatters), I woke up to all of my sites being taken down by some friendly hacker.

I admit I don't have the best security practices. I have the skills to do it, but everything I do is public, so security is really not a concern. I just don't want my shit taken down by someone, or have my readers experience an outage. I got backups of things up the wazoo, in three different locations, including a nuclear missile silo in Nebraska. I can restore and rebuild at any point, but I don't like people taking my sites down just because they disagree with me. 

So I moved everything to run on Github a couple years ago. I'll outsource my security to them. All of my API industry research projects have a JSON core, driving the data, content, and API definitions for the APIs I create and keep an eye on--often times there are code samples, libraries, and other open tooling as well. So I'd say that my "websites" meet the criteria of being a worthy project for hosting on Github Pages. All of my research, except what ends up in a PDF, is meant to be open, forkable, and remixable--so Github just works for me.

With this move to being static my world became a dynamic push, instead of a dynamic pull, which significantly reduces the attack surface area for hackers--well except for the part where Github is hosting my sites, and I'm outsourcing security to them. At least it isn't my responsibility, plus I get the network effect of being on Github. When this is coupled with CloudFlare for my DNS, and offloading my DNS security to their experts, I figure I'm coming out ahead when it comes to securing my public presence, and what is most important to me--my research.

I still have my administrative API monitoring system (which is dynamic), something I will be working to further localize on my workstation and a local server--it doesn't need to be on the Internet all the time. Then all that is left is my API stack--a stack of simple web APIs that help me operate the API Evangelist network. I will have to secure my APIs, but it dramatically reduces the publicly available surface area I have to defend, something that helps ensure my static presence will always remain available--even if my APIs go away.

In the current online environment I am not one to pull back from using the cloud after all I have invested in it, but with the volatility that lies ahead, it makes sense to keep my surface area defined, including all domains, and 3rd party services, and reduce the size of it at every turn. When possible, it also makes sense to go static, something that I'm seeing reduce a lot of friction and concern for me when it comes to maintaining my very public online existence.


Friday, September 2, 2016

People Telling Me Markets Will Work Things Out And I Should Not Complain

When I am working to push back on various aspects of the API space, one of the ways people feel they need to push back on me is to tell me that I shouldn't be getting all worked up about it--that markets will work things out. Aside from this being a silly argument about something that I don't really acknowledge as a reality (markets do not work themselves out), I am endlessly fascinated by how people wield this like markets are something over there that does not include me (us).

I'm pretty confident that markets include me. I'm pretty sure that I (the individual) can have an influence over market outcomes. I think this concept is a tool that the market players equip the sleeping masses with, to keep the average citizen out of the way so that they can profit from market activity unencumbered. We all have a role to play in markets, and my self-appointed role is to help influence the portion of markets being touched by the Internet, and specifically API technology.

If you tell me that "markets will work things out" in the course of our engagement in the API space, you just labeled yourself as being pretty simplistic in your views of how markets and the world work. You just put yourself in the bucket of people who consciously, or sometimes unconsciously, work on behalf of the "machine" to keep the world compliant, and being good consumers. I'm in the bucket over here, where I believe that markets need constant evaluation, discussion, and push back to make sure they are looking out for humans--which is something I will be working to do until I am dead and buried.


Thursday, September 1, 2016

Real Time Is Often More About What They Desire Than What We Want

There are many definitions of what exactly constitutes "real time". I find it is a very relative thing, depending on who you talk to. When asked, many will respond with push notifications as an example. Others immediately think chat and messaging. If you are talking to developers they will reference specific technology like XMPP, Jabber, and WebSockets.

Real time is relative. It is relative to the situation, and to those involved. I'd say real time also isn't good by default, in all situations. The need for real time might change or evolve, and mean different things in different industries. All of this variance really opens up the concept for a lot of manipulation and abuse.

I feel like those who are wielding real time often speak of the benefits to us, when in reality it is about real time in service of what they desire. They want a real time channel to you so they can push to you anytime, and get the desired action they are looking for (ie. click, view, purchase). In this environment, the concept of real time quickly becomes just noise, distraction, and many other negative things--rendering real time a pretty bad idea much of the time.


Tuesday, August 23, 2016

Fine Tuning My Real Time For Maximum Efficiency

I am working hard to fine tune my world after coming back from the wilderness this summer. Now that I'm back I am putting a lot of thought into how I can optimize for efficiency, as well as for my own happiness. As I fire back up the old API Evangelist machine, I'm evaluating every concept in play, process being used, and tool in production, and assessing how each benefits me or creates friction in my world.

During the next evolution of API Evangelist, I am looking to maximize operations, while also helping to ensure that I do not burn out again (5 years was a long time). While hiking on the trail I thought A LOT about what real time is, and upon my return I've been reverse engineering what is real time in my world, and fine tuning it for maximum efficiency and for helping me achieve my objectives.

As I had all the moving parts of real time spread out across my workbench, one thing I noticed was the emotional hooks it likes to employ. When I read a Tweet that I didn't agree with, or read a blog post that needed a rebuttal, or a Slack conversation that @mentioned me--I felt like I needed to reply. When in reality, there is no reason to reply to real time events in real time. This is what real time wants, not always what you want.

I wanted to better understand this element of my real time world, so I reassembled everything and set it back into motion--this time I put a delay switch on ALL responses to real time events across all my channels. No matter how badly I wanted to, I was forbidden to respond to anything within 48 hours. It was hard at first, but I quickly began to see some interesting efficiency gains and better overall psychological well-being.

Facebook, Twitter, Github, and Slack were all turned off, and only allowed to be turned on a couple of times a day. I could write a response to a blog post, but I wouldn't be allowed to post it for at least two days. I actually built this delay switch into my world, as a sort of scheduling system for my platform, which allows me to publish blog posts, Tweets, Github commits, and other pushes that were often real time, using a master schedule.

After a couple of weeks my world feels more like I have several puppets on strings, performing in a semi-scripted play--where before it felt the other way around, like I was a puppet on other people's strings, performing in a play I had never seen a script for.


Monday, August 22, 2016

The Blockchain As An Economic Engine For The Cybersecurity Industry

I am slowly getting back into the routine of doing my weekly roundups. It has been a while since I published any, even though I regularly do the work. While I was going through this week's roundup of items I curated, I thought some of the blockchain related goings on were particularly interesting.

Not sure about you, but I can't help but think that this has the makings of a pretty interesting economic engine for the cybersecurity industry. You have government hackers, organized hackers, rando hackers, concerns around having enough talent, investors pouring money into the space, and 1000 pound gorillas firing up their digital factories.

I'm guessing that blockchain and cybersecurity are going to go hand in hand, and be a very lucrative endeavor for a select few.


Thursday, August 18, 2016

You Better Collect All The Data Because You Might Need It Some Day

I recently read a couple of articles that focused on the data collection practices of businesses, where the moral of the story was that you should be collecting all the data you possibly can, even if you don't need it, because you never know what you'll need in the future. This is the popular perspective of a significant portion of the data community, which has transferred to the world of APIs through a natural association.

While this might be tempting, and even seem logical at times, I recommend you stop and think about it deeply. The NSA is employing the approach, and leading tech companies like Google, Facebook, and others are thinking in similar ways. Pretty much saying that if you have all the data, you will have all the knowledge--something that really hasn't ever been proven, remaining a constant fantasy of technologists.

Imagine the person who obsessively collects everything, thinking some day it will be valuable. Often this is harmless, but what if some of it contained hazardous material (ie. mercury, lead) that may have been considered safe at one point, and now you have large quantities of it--not good, with costly implications. Imagine if, at some point, you cross over some public zoning, safety, and other regulatory areas, without knowing it. Consider how the world has shifted and changed in the last 50 years, and how rapidly things have "seemingly" changed in the last 20 years when it comes to public opinion--what if opinions on data gathering practices change drastically in the near term future?

With the NSA, and leading tech companies, behaving pretty badly with their data collection strategies, pushback from other countries, companies, institutions, and the average citizen has already begun. Do you really want to have EVERYTHING stored in your data warehouses when this happens? Data you can't actually verify that you need to actually operate your business? What will your customers, partners, and shareholders think? What will public opinion be of your brands?

I haven't even touched on the security concerns of storing all the data this approach to gathering produces. There are numerous very serious considerations on the table that should always be included in decisions around exactly what data we gather and store, and what we should just let be lost in the layers of time.


Tuesday, August 16, 2016

Humans Are Always The Weakest Link When It Comes To Securing Our Bits & Bytes

I added a specific project for aggregating and tracking vulnerabilities in our online infrastructure, in addition to my existing security and cybersecurity research. Not all of the vulnerabilities I curate are API specific, but I find it helps increase my overall awareness of security related issues, and I find it useful for thinking through the possibilities when it comes to web vulnerabilities being applied to APIs.

Across these three areas of my security research, the one common pattern I see across the security landscape is that humans are always the weakest link. Almost all of the breaches I read about occur because of some human, being, well, human, allowing some often well-known exploit to be used. Hacking systems is less about knowing the tech exploits than it is about knowing and maximizing the human exploits--as we are always the weakest link.

I use this awareness when I'm evaluating the promise of any security-focused solution I come across. If the solution prescribes more technology to help us secure the technology we have--I'm guessing it is most likely smoke & mirrors about 95% of the time. If the solution offers something that helps address the human variable in the equation, and augments this reality, making us all more security minded, and ultimately security literate--the chances it will make a difference increase, in my opinion.


Monday, August 15, 2016

Using Github Repos And Jekyll As A Data Store

Github repositories are the heart of all of my API research. Each of the 200+ areas of my research lives as an individual repository, and I publish most of my raw research here as JSON and YAML--then make it viewable and explorable using JavaScript and HTML. Github + Github Pages + Jekyll is what makes all of this possible.

I have been working professionally with databases for over 25 years--I am a database guy. From 1997 through 2007 I was heavily dependent on my SQL Server database(s). From 2007 through 2017 I am heavily dependent on my MySQL database(s). I predict from 2017 through 2022 I will be heavily dependent on my JSON and YAML data stores available via Github and my own server infrastructure.

Using Github repositories as a data store will not replace my central database infrastructure, but it will augment it significantly. Much like dynamically publishing HTML documents from databases has dominated my web evolution, the dynamic publishing of JSON and YAML documents is what drives much of my public presence during my API evolution. Github allows me to drive the publishing of this data using Github Pages, while using Git to maintain a snapshot of my data stores at any point in time.

The static nature of my data stores is efficient, in that they load fast, and leverage simple web technology (HTML, JavaScript, CSS) to accomplish their objective, whether that is delivering HTML to humans, or JSON and YAML to other systems and applications. The publish / cache nature of these representations of my data works well for my approach to storytelling. I can keep my research moving fast, keeping pace with the fast-changing landscape, or I can employ them as a snapshot that stays static forever, something I may never update.

I increasingly find people don't grasp how it is that I use Github to run API Evangelist, and the potential of Jekyll and Github when it comes to managing data, especially when it is in the service of storytelling on the web. It's not an approach I recommend everyone put to work, but as a database person, I think everyone should have Github and Jekyll as a data store in their toolbox.


Tuesday, August 9, 2016

Ignoring Bad Behavior Then Complaining When Government Regulates

I feel the drone space is a poster child for the overall technology space lately. I'm heavily influenced because it is what I have been doing for the last couple of months, but as I turn my head back to paying attention to mainstream tech, what I'm seeing with drones has taught me lessons that I'm finding apply very nicely to the wider technology landscape.

I read three separate articles this week where authors were outlining what is next for drones, and what is holding the industry back, and all three mentioned government regulations as being the number one thing holding drones back. Which is interesting to me because I do not feel the requirement to register my drone is holding us back. What I do feel looms over the whole space is the badly behaved drone operators out there--which naturally the coming regulations and current concerns are in response to.

When you do encounter rules about drones, or pushback from people out in the field, it is in direct response to drone operators behaving badly, yet you don't see the drone industry going out of their way to police, or rein the industry in. You do see manufacturers like DJI building in some limitations when it comes to forest fires, airports, and other no-fly zones, but you don't see the average drone blogger or drone operator telling each other to be a responsible drone operator so you don't screw this up for everyone.

I see this as an inherent flaw in how markets work. People who love markets love to bitch about government regulation, but rarely ever work to police themselves, or regulate the bad things that regulations are often responding to. In fact, I've heard people defend bad behavior as "it's not illegal yet" and "if I didn't do it my competitors would". Then they fall in line with the rest of the anti-regulation rhetoric when laws are put in place limiting what people can do in their industry.

Do not get me wrong. I am not pro-regulation. I have a realistic understanding of why we need healthy regulations and enforcement to help balance market activity, but I am not for regulation just for the sake of more government. It would make more sense if, as an industry, we had more ethics, and we worked to educate and police each other, helping set a healthy tone, so the government wouldn't need to step in. Actually, as I write this, I realize how badly behaved our own government is being when it comes to drones. Uggghh!

I predict we will see this with every new area of technology out there. The overeager entrepreneurs go too far, can't control themselves with their greed, and do things to make money that are ultimately questionable, then they bitch and complain when the government steps in to course correct the behavior. So much of what we are doing in tech is brand new, and when you bundle that with young millennials, you get a rich environment for thinking everything is new, and that we are entitled to do whatever we want--establishing a pretty dangerous cycle.

I'm applying what I've been learning from watching the drone space to other areas like healthcare and education data, and other important areas where I am seeing APIs being used for some pretty shady stuff. I am seeing folks make claims that it is for healthcare or education when it is really about getting their hands on users' data that they can sell on the open market--making for some pretty troubling stuff.


Working To Avoid The Drowning Effects Of Real Time

One thing I'm experiencing as I come out of my Drone Recovery project is the drowning effects of our real-time worlds. I am talking about the desire to stay connected in this Internet age, and to subscribe to as many available channels as possible (ie. Facebook, Twitter, LinkedIn, RSS, etc.), and more importantly, the tuning in, and responding to, these channels in real time.

You hear a lot of talk about information overload, but I don't feel the amount of information is the problem. For me, the problem comes in with the emotional investment demanded by real-time, and the ultimate toll it can take on your productivity, or just general happiness and well-being. You can see this play out in everything from expectations that you should respond to emails, all the way to social network memes getting your attention when it comes to the election, or for me personally, the concerns around security and privacy using technology.

The problem isn't the amount of information, it is the emotional toll of real-time. I can keep up with the volume of information; it's once I start paying the toll fee associated with each item that it begins to add up. I feel the toll fee is higher in the real-time lane than when you engage on your own schedule. The people who demand I respond to emails, and be first to the story, have skin in the game, and will be collecting a portion of the toll fee, so it is in their best interest to push you to be real time.

Sure, there are some items that will be perishable in all of this. I am not applying this line of thinking across the board, but I am prioritizing things with this in mind. In an increasingly digital world, the demands on our time are only going to increase. To help me to keep from drowning, I'm going to get more critical about what I accept into my world in a real time way. My goal is to limit the emotional toll I pay, and maximize my ability to focus on the big picture when it comes to how technology, and specifically APIs are impacting our world.


Losing Control Over Our Digital Self When So Many Domains Take A Piece

I find myself even more aware of the demands being placed on our lives through Internet-enabled technology after spending two months in the wilderness, away from my computer and cell phone. As I fire up my tools for monitoring the API space, the assault on our digital self by the tech community streams by on the screen like a scene from the Matrix movie.

One of the tools I operate regularly is called Charles Proxy. I use it to automatically map out the APIs I am using, helping me understand the surface area of common APIs. On select days I will keep this running in the background, routing all my mobile, web, and desktop activity through the proxy. Every five minutes it dumps an XML file of my activity to my local Dropbox folder. Once the files are synced to the cloud my API monitoring system grabs this history and generates OpenAPI specifications for any APIs it finds, and one by-product of all of this is that I also get a record of every single domain I touched over the course of the day.
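As a rough sketch of how that by-product could be tallied, here is the kind of script you could run over the dumped files--the folder path is a hypothetical placeholder, and rather than parsing the Charles session format properly it just pulls hostnames out of any URLs it finds with a regular expression.

```javascript
// Rough sketch: scan the proxy dump files and tally how often each domain
// shows up, which is the raw material behind a tag cloud like the one
// described below. The folder path is a hypothetical placeholder.
var fs = require('fs');
var path = require('path');

var dumpFolder = '/path/to/dropbox/charles-dumps';
var counts = {};

fs.readdirSync(dumpFolder)
  .filter(function (file) { return file.endsWith('.xml'); })
  .forEach(function (file) {
    var xml = fs.readFileSync(path.join(dumpFolder, file), 'utf8');
    // Pull the hostname out of every URL that appears in the dump
    var urls = xml.match(/https?:\/\/[^\/"'<\s]+/g) || [];
    urls.forEach(function (url) {
      var host = url.replace(/^https?:\/\//, '');
      counts[host] = (counts[host] || 0) + 1;
    });
  });

// Rank the domains by how much of the day's traffic they account for
Object.keys(counts)
  .sort(function (a, b) { return counts[b] - counts[a]; })
  .slice(0, 25)
  .forEach(function (host) { console.log(counts[host] + '\t' + host); });
```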

I pulled a sampling of this traffic, grouped by each unique domain, and generated this tag cloud. There are 306 domains included in this sampling, with a maximum of 250 showing in the tag cloud, but the domains that float to the top, achieving a significant portion of my attention, tell an interesting story--there is a lot to consider here, but three significant stories stand out for me.

Who Gets Most Of My Attention On A Regular Basis
This is all traffic from the websites I visit, as well as my desktop and mobile applications, so you see the core of my existence spent on my Apple devices, and that I still live in a very Googley world, while doing much of my communication via Twitter, Slack, and Skype. I do a lot of Googling, as the majority of my days are spent researching a variety of topics, and since I opt to leave advertising unblocked, you also see the fingerprint of DoubleClick when it comes to ad networks attempting to get my attention.

Percentage Of My Attention Spent Within My Domains
While Google and Apple still command a big portion of my attention, it makes me happy to see both of my own domains present in this tag cloud--showing a healthy "reclaim your domain" balance to my world. It is important to me that as much of my time as possible is spent operating within my own domains. I will never be able to operate 100% on my own property, but ensuring that my domains occupy top ten slots on this map is critical to me operating a successful business, generating revenue from my hard work, and fending off all of these domains looking to own a piece of my digital self for their benefit.

Overall Volume Of Domains Vying For My Attention
This is just a sampling of the domains that are vying for my attention on a daily basis. At some point, I'll publish a more realistic daily, weekly, and monthly sampling, hopefully helping paint a more complete picture. However, I feel this sampling does show the scope of the assault that occurs daily on our digital self. All of these companies want a piece of my digital self, not because they care about me, or what I am doing, but because they want to generate revenue from this little piece of my digital self, and any activity that occurs.

A significant portion of what I do each day is dedicated to making sure that I clearly define who Kin Lane and API Evangelist are, and capture as much of the exhaust generated in the form of blog posts, tweets, images, video, and other bits and bytes as I can. This is how I define my brand, publicize my work, and retain as much control over what I do as I possibly can, helping me better make a living from my work. The more I define and defend myself from these domains, the more I keep for myself, enabling me to maintain control over the digital version of myself.

We only have a few hundred years under our belts when it comes to defining our physical self, our rights, and the boundaries of our public personas. We only have a few years under our belts when it comes to defining our virtual self, our rights, and the boundaries of our virtual public personas. What is even scarier is that increasingly the predatory behavior of these domains in the online world is being extended into our physical world through home automation, connected cars and cities, drones, and other ways the Internet of Things (IoT) is penetrating our personal, professional, and industrial worlds.

As I look at the logs of these domains that are demanding a piece of my virtual self each day, I can't help but feel like the majority of us will lose control over our digital self before we ever get the opportunity to fully know ourselves--when so many domains take a piece of us each day.


On Being SMART (Surveillance Marketed As Revolutionary Technology) And Greedy

I love Evgeny Morozov's (@evgenymorozov) tweet defining the acronym SMART as Surveillance Marketed As Revolutionary Technology. It has provided me with a wealth of material for my alternate storytelling channels, and provides an excellent litmus test to apply to companies I come across during my monitoring of the API space.

As I'm reading "do smart devices mean dumb security" out of Defcon this year, I'm reminded of his funny, yet also very troubling, definition of SMART. I'm coming across an increasing number of connected devices that have incomplete API programs available. Meaning APIs are present, available on the open Internet, but the required documentation, support, and other essential resources are missing--which, like mobile, often means security and privacy considerations are incomplete as well.

This last week I talked about how venture capital investment can provide incentives that are at odds with healthy, stable, consistent, and secure API operations. You see this play out with mobile devices, where a platform focuses on the mobile app so heavily that they pretend the web APIs behind it are invisible, a practice I am seeing rapidly evolve with the Internet of Things (IoT).

Companies are racing to connect everyday objects to the Internet because they want to convince consumers to buy a new product that will give them access to the valuable data that will be generated (a precedent set by the mobile evolution). In the race to create this new breed of products that consumers will want, and generate this new, highly valuable data, the willingness to secure these new data streams, and protect the safety and privacy of consumers, is often very low on the list of priorities.

As stated in the BBC article out of Defcon, these devices will become a playground for hackers, whatever their motivations might be. The average person will be unknowingly building out the Internet in this very unstable fashion, giving away their data and privacy, and that of those around them. The greed behind the pushing of SMART objects into our personal and professional worlds will happily continue as long as these companies are given continued access to this extremely valuable data and surveillance exhaust.

I'm not convinced that corporations, institutions, the government, or individuals will all be up to the task when it comes to securing all of this tech we are inviting into our worlds, not when there are so many badly behaved, poorly incentivized players willing to build this dystopian version of the Internet out. This will not play out well...