Monday, August 18, 2014

All Government Should Have A Social Media Directory API

I was just looking for a list of Twitter accounts for the City of Chicago, and I came across the standard social media directory page you will find on most city, county, and state government websites.

After looking at the list, I had a decision to make. I could either manually enter each of the Twitter accounts into my CRM, or I could write a script to scrape the page, harvest the content, and put it into my CRM for me--I prefer to write scripts over data entry any day.
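
Something like this quick Python sketch is all it takes; the directory URL and page markup here are placeholders, not the actual City of Chicago page, so treat it as a rough illustration of the approach rather than the exact script I ran:

```python
# A rough sketch of the kind of scraper I'm talking about -- the URL and the
# HTML structure are placeholders, not the actual City of Chicago page.
import json

import requests
from bs4 import BeautifulSoup

DIRECTORY_URL = "https://www.example.gov/social-media"  # hypothetical directory page

def scrape_social_directory(url):
    """Pull every Twitter link off a social media directory page."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    accounts = []
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if "twitter.com/" in href:
            accounts.append({
                "department": link.get_text(strip=True),
                "twitter": href.rstrip("/").split("/")[-1],
                "url": href,
            })
    return accounts

if __name__ == "__main__":
    print(json.dumps(scrape_social_directory(DIRECTORY_URL), indent=2))
```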

Here is the resulting JSON:
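
The full export is not reproduced here, but each entry ends up looking something like this (illustrative values only, not the actual Chicago accounts):

```json
[
  {
    "department": "Office of the Mayor",
    "twitter": "examplemayor",
    "url": "https://twitter.com/examplemayor"
  },
  {
    "department": "Department of Public Health",
    "twitter": "examplehealth",
    "url": "https://twitter.com/examplehealth"
  }
]
```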

It got me thinking that every government entity, whether city, county, or state, should have a social media directory API like you find for the federal government. We should never have to scrape a list of our government social media accounts.

Once there is an API for each government entity, we can drive existing websites and other current locations with the API, as well as use it in any new web or mobile apps.



from http://ift.tt/VBg1yd

Wednesday, August 13, 2014

Never Looking Out The Window, Let Alone Trusting Anyone External To The Department of Veterans Affairs

I haven't written much about my experience last summer as a Presidential Innovation Fellow (PIF) at the Department of Veterans Affairs (VA). I have lots of thoughts about my experience at the VA, as well as participating in the PIF program, and I'm choosing to trickle these thoughts out as I continue to make sense of them, and bring them into alignment with my overall mission as the API Evangelist.

I was given three projects when I started work at the VA: 1) inventory data assets, 2) inventory web services, and 3) move forward D2D, a forms web service that would allow VA hospitals and Veteran Service Organizations (VSOs) to submit forms through the claims process on behalf of veterans.

The most prevalent illness I witnessed across these three efforts was an unwillingness to trust outside groups (even VSOs and hospitals), and a lack of desire to share data and resources with anyone outside of the VA (ironically, except contractors), to the point where groups seemed to take defensive positions around what they did on behalf of our veterans. This culture makes for some pretty toxic environments, which I personally feel contributes to many of the problems we’ve seen bubble up into the public media space of late.

While working at the VA you constantly hear about the VA claims backlog, and how we need to optimize, but when you bring up sharing data or resources with other federal agencies, or with trusted external partners like hospitals and VSOs, you get pushback citing concerns about security, personally identifiable information (PII), etc. All of which are valid concerns, but there are proven ways to mitigate these risks through Identity and Access Management (IAM), which is another whole post in itself. You start feeling crazy when you get pushback for arguing that a doctor should be able to submit disability questionnaires via an iPad application, that uses an existing VA API, in a way that securely authenticates the doctor.

As a result of other systemic and cultural issues, and mistakes made in the past, VA employees and contractors are extremely averse to opening up to the outside world, even if it can help. I kept hearing references to the 2006 data breach, where an employee brought a laptop home, affecting 26M individuals, as a reason to keep systems locked down. This horror story, plus a variety of other cultural issues, is keeping VA staff from accepting any new way of thinking, even if it could help reduce their workload, improve the claims process, and better serve veterans and their families.

This is a pretty fundamental flaw in how large government agencies operate, one that is in conflict with the solutions APIs can bring to the table. I don’t give a shit how well designed your API is, in this environment you will fail. Period. I do not think I will ever fully understand what I saw at the VA, while a PIF in Washington DC, but I feel like I’m finally reaching a point where I can at least talk about things publicly, put my thoughts out there, and begin to weave my experiences as a PIF at the VA into my overall API Evangelist message.



from http://ift.tt/VlnQIc

The Color Of Money When Deploying APIs At The Department Of Veterans Affairs

I haven't written much about my experience last summer as a Presidential Innovation Fellow (PIF) at the Department of Veterans Affairs (VA). I have lots of thoughts about my experience at the VA, as well as participating in the PIF program, and I'm choosing to trickle these thoughts out as I continue to make sense of them, and bring them into alignment with my overall mission as the API Evangelist.

I just wrote a piece on replacing legacy systems at the VA using APIs, where one of the systemic constraints that restricts the modernization of VA systems using APIs is purely about money, and more specifically the color of money. I won’t bore you with the details of the topic, but in short the color of money means: money appropriated for one purpose cannot be used for a different purpose, according to the Purpose Act (31 U.S.C. § 1301).

In short, if $50M was given to sustain an existing legacy system, and that money cannot be re-appropriated and applied to the newer system, what incentive is there to ever get rid of legacy VA systems, or modernize any government system for that matter, whether it is using APIs or anything else? Newer approaches to using technology are difficult to accept when you are working hard to accomplish your job each day, but if you already have $50M in budget for a specific job, and that job won’t go away unless you choose to make it go away, guess what happens? Nothing changes…hmmm?

As I said before, I don’t give a shit if you deploy APIs from the ground up, or excite people via a presidential mandate from the top down, if you have incentives in place for employees to do the opposite, based upon how money is allocated, you won’t be changing any behavior or culture—you are wasting your energy. I don’t care how excited I get any one individual, team, or department about the potential of APIs bundled with new systems, if it means their job is going away—too bad, nobody will give a shit.

Think about this scenario, then consider that $1,810M of the $3,323M overall VA budget (54%) is sustainment. Granted, this isn't all IT system sustainment, but still over half of the budget is allocated to keep shit the same as it is. Imagine what kind of environment this creates for the acceptance of modernization efforts at the VA.

This is a pretty fundamental flaw in how large government agencies operate, one that is in conflict with the solutions APIs can bring to the table. I don’t give a shit how well designed your API is, in this environment you will fail. Period. I do not think I will ever fully understand what I saw at the VA, while a PIF in Washington DC, but I feel like I’m finally reaching a point where I can at least talk about things publicly, put my thoughts out there, and begin to weave my experiences as a PIF at the VA into my overall API Evangelist message.



from http://ift.tt/1yy49tm

Taking Web Service Inventory At The Department of Veterans Affairs

I haven't written much about my experience last summer as a Presidential Innovation Fellow (PIF) at the Department of Veterans Affairs (VA). I have lots of thoughts about my experience at the VA, as well as participating in the PIF program, and I'm choosing to trickle these thoughts out as I continue to make sense of them, and bring them into alignment with my overall mission as the API Evangelist.

One of the jobs I was tasked with at the VA as a PIF was taking inventory of the web services within the agency. When asking folks where these web services were, I was directed to various IT leads in different groups, each giving me one or two more locations where I could look for Word, Excel, or PDF documents talking about web services used in projects and known systems. Most of the time these were redundant lists, pointing me to the same 5 web services, and omitting 20-30 that were actually in use for a project.

At one point I was given the contact information for a lady who had been working for two years on a centralized web service registry project that would be the holy grail of web service discovery at the VA. This was it! It was what I was looking for, until I sat in on the weekly call where this project got a 10 minute update, demonstrating that the effort was still stuck on defining the how and what of the registry, and had never actually moved on to cataloging the web services in the wild at the VA. ;-(

Then one day I was introduced to a gentleman in a back office, in an unmarked cubicle, who seemed to know where most of the web services were. The one difference with this person was that he was a contractor, not an employee. One thing you hear about, but do not experience fully until you work in government, is the line between government employee and contractor—in meetings and conversations you know who is who (it is pretty clear), but when it comes to finding APIs, I’m sorry, the contractors know where everything is. This contractor had some pretty interesting lists of what web services were in operation, where they were, and which groups at the VA owned them, including up to date contact info. These contractors also had their finger on the pulse of any project that was potentially moving the web services conversation forward, including the global registry.

Overall I was surprised at how IT groups knew of their own web services, and couldn't care less about the web services of other groups, while contractors knew where all the web services were across the groups. I was closing in on 500 web services on my list before I left during the shutdown, and I wonder how many more I would have found if I had kept up the good fight. This mission had nothing to do with APIs, except that web services are often compared to APIs; I was purely taking inventory of what was already in place, a process that went far beyond just technical inventory, and shed light on some serious business and political flaws within operations at the VA.

This is a pretty fundamental flaw in how large government agencies operate, one that is in conflict with the solutions APIs can bring to the table. I don’t give a shit how well designed your API is, in this environment you will fail. Period. I do not think I will ever fully understand what I saw at the VA, while a PIF in Washington DC, but I feel like I’m finally reaching a point where I can at least talk about things publicly, put my thoughts out there, and begin to weave my experiences as a PIF at the VA into my overall API Evangelist message.



from http://ift.tt/1yy45tC

Replacing Legacy Systems With APIs At The Department Of Veterans Affairs

I haven't written much about my experience last summer as a Presidential Innovation Fellow (PIF) at the Department of Veterans Affairs (VA). I have lots of thoughts about my experience at the VA, as well as participating in the PIF program, and I'm choosing to trickle these thoughts out as I continue to make sense of them, slowly bringing them into alignment with my overall mission as the API Evangelist.

On deck are my thoughts on replacing legacy systems with APIs at the Department of Veterans Affairs. In the “real world”, one of the motivations for deploying APIs is to assist in the evolution and replacement of legacy systems. The theory is, you have an older system that needs to be replaced, so you wrap it in a modern web API, and slowly switch any desktop, web, mobile, or other client system to use the new API—then you build out a newer backend system, and make the switch in the API layer from the legacy to the newer backend, leaving everything operating as expected. API magic!

I'm used to environments that are hostile to this way of thinking, but most times in the private sector there are other business objectives that can be leveraged to get legacy system owners on board with a shift towards API deployment—I saw no incentive for this practice in the VA environment, where in reality there are incentives for IT and business owners, as well as 3rd party contractors, to keep legacy systems in place, not replace them. There are a variety of motivations for existing VA workers to keep existing systems in place, ranging from not understanding how the system works, to budgetary restrictions on how money flows in support of this pro-sustainment culture.

Here is an example. There is an old database for storing a specific type of document, a database that X number of existing desktop, server, web, or even mobile systems depend on. If I move in and create an API that allows for reading and writing data in this database, then work with all X of the legacy systems to use the API instead of a direct database connection, in theory I can then work to dismantle the legacy database and replace it with a newer, modern backend database. In most IT operations, this approach will then allow me to replace, modernize, and evolve upon an existing legacy system. This is a common view of technologists who are purely looking through a technical lens, ignoring the existing business and political constraints that exist in some companies, organizations, institutions, and government agencies.
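
To make that pattern a little more concrete, here is a minimal sketch of that kind of API facade in Python, using Flask and SQLite as stand-ins for whatever the real legacy store happens to be; the table and fields are hypothetical:

```python
# Minimal sketch of an API facade over a legacy document store.
# Flask + SQLite stand in for whatever the real legacy database is;
# the point is that clients talk to /documents, not to the database.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
LEGACY_DB = "legacy_documents.db"  # hypothetical legacy data store

def db():
    conn = sqlite3.connect(LEGACY_DB)
    conn.row_factory = sqlite3.Row
    return conn

@app.route("/documents", methods=["GET"])
def list_documents():
    rows = db().execute("SELECT id, title, body FROM documents").fetchall()
    return jsonify([dict(row) for row in rows])

@app.route("/documents", methods=["POST"])
def create_document():
    doc = request.get_json()
    conn = db()
    conn.execute("INSERT INTO documents (title, body) VALUES (?, ?)",
                 (doc["title"], doc["body"]))
    conn.commit()
    return jsonify({"status": "created"}), 201

if __name__ == "__main__":
    app.run()
```

Once every client is talking to /documents instead of connecting to the database directly, swapping the stand-in backend for something modern becomes a change inside the API layer, which is exactly the promise being sold here.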

In the real world, you have staff, budgets, workflows, and decision making processes that are already in place. Let’s say this legacy database had $50M a year allocated in budget for its operation, and I replace it with a newer database, plus an API, which operates for $10M a year—you’d think I would get to reallocate the staff, budget, and other resources to developing newer mobile apps and other systems with my newly liberated $40M. Right? Nope…that money goes away, and those people have no interest in moving from supporting a COBOL system to supporting a new MongoDB + Node.js API that is driving a Swift iPhone app. ;-(

This is a pretty fundamental flaw in how large companies, organizations, institutions, and government agencies operate, one that is in conflict with what an API philosophy can bring to the table. I don’t give a shit how well designed your API is, in this environment you will fail. Period. I do not think I will ever fully understand what I saw at the VA, while a PIF in Washington DC, but I feel like I’m finally reaching a point where I can at least talk about things publicly, put my thoughts out there, and begin to weave my experiences as a PIF at the VA into my overall API Evangelist message.



from http://ift.tt/1vKk0c8

Friday, August 1, 2014

Please Provide An Easy To Copy Logo And Description Of What You Do

I spend a lot of time looking at the websites of companies who are doing cool things in the space. I track on about 2,000 companies in the API sector, and as part of this monitoring I add the company name, logo, brief description, and usually their Twitter and Github accounts to my systems on a regular basis.

Using this information, I will publish a company as part of any research I do across multiple API business categories like API design, deployment, or management. If a company is doing something interesting, I need to be able to quickly find a good quality logo, and a short, concise description of what the company does—something that is easier said than done.

You'd be surprised how hard it is to grab a logo when it's buried in the CSS, or to find a single description of what the company does, something I usually have to go to Crunchbase or AngelList for, and often have to write myself.

If you want to help people talk about your company and what you are doing, make it easy for them to find a logo and description. Please don’t make us click more than once to find this information--trust me, it will go a long way in helping bloggers and other people showcase what you are up to.



from http://ift.tt/1ncPQUX

Why I Post Stories To My Blog(s) Like I Do

I get a lot of folks who tell me how they love my storytelling across my blog(s), but sometimes they find it hard to keep up with my posting style, emphasizing that on some days I post too much, and they just can't keep up.

Since I just got home from API Craft in Detroit, with a mind full of ideas and an Evernote full of half-baked stories, and I feel a storytelling spree coming on, I figured I'd kick it off by telling the story of why I blog the way I do.

First, I blog for me. These stories are about me better understanding the complex world of APIs, and the storytelling process forces me to distill my thoughts down into smaller, more understandable chunks.

Second, I do not feel I can move on from an idea until it has been set free—meaning it is published to the site, and tweeted out. Only then can I detach and move on to the next thing on my list. I've tried scheduling, and all of that jive, but they only conflict with my emotional attachment to my stories.

Third, there is an emotional attachment to each one of my stories. This makes my storytelling about me, not about pageviews, SEO, or any other common metric in the blogosphere—my blogging is about me learning, and sharing these ideas openly with the world, everything else is secondary.

After all of that, my blogs are about you, the audience, and helping you understand the world of APIs. I’m sorry if my storytelling flow is non-existent some days / weeks, and then overwhelming other days. I leave it up to you to bookmark, and flag things for consumption later.

There are some mechanisms built into my network of sites to help you with this process. The blog uses Jekyll, which has a nice next / previous feature on each blog post, so if you visit the latest post, you can just hit previous until your head explodes. (I’ve seen it, it is messy)

Also, all of my curation of stories across the API space, and my analysis, eventually trickles down to all my research sites. So anything I read or write about API design will eventually be published to the API Design research site. You can just make regular rounds through my core research to catch up on what I read, think, and publish—I do this regularly myself.

This is just a little insight into my madness, and it is just that—my madness. Welcome to it, and I hope you enjoy.



from http://ift.tt/1xLVpzt

Tuesday, July 22, 2014

Reclaim Your Domain LA Hackathon Wrap-up

I spent the weekend hacking away with a small group of very smart folks, at the Reclaim Your Domain Hackathon in Los Angeles. Fifteen of us gathered at Pepperdine University in west LA, looking to move forward the discussion around what we call “Reclaim Your Domain”. This conversation began last year, at the #ReclaimOpen Hackathon, continued earlier this year at Emory University, and we were looking to keep the momentum building this weekend at Pepperdine.

Here is a breakdown of who was present this weekend:

  • Jim Groom - University of Mary Washington (@jimgroom)
  • Michael Caulfield - WSU Vancouver - http://hapgood.us/ - (@holden)
  • Michael Berman - California State University Channel Islands (@amichaelberman)
  • Chris Mattia - California State University Channel Islands (@cmmattia)
  • Brian Lamb - Thompson Rivers University (@brlamb)
  • Timothy Owens - University of Mary Washington (@timmmmyboy)
  • Mikhail Gershovich - Vocat (@mgershovich)
  • Amy Collier - Stanford (@amcollier)
  • Rolin Moe - Pepperdine (@RMoeJo)
  • Adam Croom - University of Oklahoma (@acroom)
  • Mark C. Morvant - University of Oklahoma (@MarkMorvant)
  • Ben Werdmuller — Withknown (@benwerd)
  • Erin Richey — Withknown (@erinjo)
  • Kin Lane — API Evangelist (@kinlane)
  • Audrey Watters — Hack Education (@audreywatters)

If you are unsure of what #Reclaim is all about, join the club, we are trying to define it as well. For more information you can head over to Reclaim Your Domain, or you can also look at Reclaim Hosting, and the original Domain of One's Own at the University of Mary Washington, which provided much of the initial spark behind the #Reclaim movement. Ultimately, #Reclaim will always be a very personal experience for each individual, but Reclaim Your Domain is primarily about: 

Educating, and empowering individuals to define, reclaim, and manage their digital self

The primary thing I got out of this weekend, beyond the opportunity to hang out with such a savvy group of educators, was the opportunity to talk through my own personal #Reclaim process, as well as my vision of how we can use Terms of Service Didn’t Read as a framework for the #Reclaim process. The weekend reinforced the importance of APIs in higher education, not just in my own API Evangelism work, but in contributing to the overall health of the Internet (which I will talk about in a separate post).

To recap what I said above, there are three domains you will need to visit to learn about my current #Reclaim work:

  • Reclaim Your Domain - The project site for all of this work, where you will find links to all information, a calendar of events, and links to individual reclaim sites.
  • Kin Lane Reclaim Your Domain - My personal #Reclaim website where I am working on reclaiming my domain, while I also work to define the overall process.
  • Terms of Service Didn’t Read - A website for establishing plain English, yet machine readable, discussions around the terms of service for the platforms we depend on.

During the weekend I was introduced to some other very important tools, which are essential to the #Reclaim process:

  • Known - A publishing platform that empowers you to manage your online self, available as an open source tool, or as a cloud service. I’m still setting up my own instance of Known, and will have more information after I get set up, and play with it more.
  • IndieAuth - IndieAuth is a way to use your own domain name to sign in to websites, which is a missing piece on the identity front for #Reclaim. Same as Known, I’ll have more information on this after I play with it.

I also got some quality time getting more up to speed on two other tools that will be important to #Reclaim:

  • Smallest Federated Wiki - A simple, and powerful wiki implementation that uses the latest technology to collaborate around content, with a very interesting approach to using JSON, and plugins to significantly evolve the wiki experience.
  • Reclaim Hosting - Reclaim Hosting provides educators and institutions with an easy way to offer their students domains and web hosting that they own and control.

Over the course of two days I was able to share what I was working on, learn about @withknown, and what @holden is up to with Smallest Federated Wiki, and get a closer look at what @timmmmyboy, @jimgroom, and @mburtis are up to with Reclaim Hosting, while also exploring some other areas I think are vital to #Reclaim moving forward. #WIN There were also some other really important takeaways for me.

POSSE
POSSE is an acronym for Publish (on your) Own Site, Syndicate Elsewhere. For the first time I saw an application that delivers on this concept, while also holding potential for the whole reclaim your domain philosophy--Known. I am excited to fire up my own instance of Known and see how I can actually use it to manage my digital self, and add this slick piece of software to the #Reclaim stack of tools that everyone else can put to use.

The Importance of API 101
During the second day of the hackathon, I was asked to give an API 101 talk. I headed to the other end of the meeting area, allowing anyone who wasn’t interested in listening to me to continue hacking on their own. I was surprised to see that everyone joined me to learn about APIs--well, everyone except @audreywatters (API blah blah blah). I felt like everyone had a general sense of what an API was, but only a handful of folks possessed intimate knowledge. I used the separation of websites being for humans, and APIs being for other applications and systems, as the basis for my talk—showing how websites return HTML for displaying to humans, and APIs return JSON meant to be used by applications. Later in the day I also wrote a little PHP script which made a call to an API (well, a JSON file), then displayed a bulleted list of results, to help show how APIs can drive website content. Once again I am reminded of the importance of API 101 demos, and how I need to focus more in this area.
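
The demo script itself was PHP, and I'm not reproducing it here, but the whole idea fits in a few lines; here is a rough Python equivalent, where the JSON location and the "name" field are placeholder assumptions:

```python
# Rough equivalent of the API 101 demo: call an API (or a plain JSON file),
# then render the results as an HTML bulleted list for a web page.
import json
from urllib.request import urlopen

API_URL = "https://example.com/companies.json"  # placeholder for the JSON file used in the demo

def bulleted_list(url):
    with urlopen(url) as response:
        items = json.load(response)
    lines = ["<ul>"]
    for item in items:
        lines.append(f"  <li>{item['name']}</li>")  # assumes each item has a "name" field
    lines.append("</ul>")
    return "\n".join(lines)

if __name__ == "__main__":
    print(bulleted_list(API_URL))
```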

The Importance of Github 101
One of the topics we covered was the basics of using Github. I walked through the basic concepts that surround Github usage, like repositories, forks, commits, and pull requests—demonstrating how multiple users can collaborate, and work together on not just code, but content. I demonstrated how issues and milestones can also be used to manage conversation around the project, in addition to working with repository files. Lastly, I walked through Github Pages, and how, using a separate branch, you can publish HTML, CSS, JavaScript, and JSON for projects, turning Github into not just a code and content management platform, but also a publishing endpoint.

APIs.json Playing a Role in #Reclaim
After hearing @timmmmyboy talk about how Reclaim Hosting aggregates domain users within each university, I brought up APIs.json and how I’m using it as an index for APIs in both the public and private sector. While it may not be something that is in the immediate roadmap for #Reclaim, I think APIs.json will play a significant role in the #Reclaim process down the road, and is worth noting here.

Containerization Movement
One pattern I saw across the Reclaim Hosting and Domain of One’s Own work from @timmmmyboy, @jimgroom, and @mburtis, is that they are mimicking what is currently happening in the @docker dominated containerization movement we are seeing from tech leaders like Amazon, Google, Microsoft, and Red Hat. The only difference is Reclaim Hosting is doing it as apps that can be deployed across a known set of domains, spanning physical servers, within a particular institution. Containers offer portability for the #Reclaim lifecycle, for when students leave institutions, as well as for the wider public space, when people are looking to #Reclaim their digital self.

Importance of APIs in Higher Ed
APIs are central to everything about #Reclaim. It is how users will take control over their digital self, driving solutions like Known. With #Reclaim being born, and propagated via universities, the API stakes are raised across higher education. Universities need to adopt an API way of life, to drive all aspects of campus operations, but also to expose all students to the concept of APIs, making APIs part of the university experience--teaching students to #Reclaim their own course schedule, student data, portfolio, and other aspects of the campus experience. Universities are ground zero when it comes to exposing the next generation of corporate employees, government workers, and #Reclaim informed citizens--we have a lot of work to do to get institutions up to speed.

Evolving the Hackathon Format
The Reclaim Your Domain LA Hackathon has moved forward the hackathon definition for me. There were no applications built over the weekend, and there were no prizes given away to winners, but there was significant movement that will live beyond just this single event—something that the previous definition of hackathon didn’t possess for me. Fifteen of us came together Friday night for food and drink at @amichaelberman's house. Saturday morning we came together at Pepperdine and spent the day working through ideas and tool demonstrations, which included a lot of open discussion. Saturday night we came together at our house in Hermosa Beach, where we drank, continued conversations from the day, and Jazzercised on the roof until the wee hours of the morning. Then Sunday we came together for breakfast, and went back to work at Pepperdine for the rest of the day. Once done, some folks headed to the airport, and the rest of us headed back to Hermosa Beach for dinner, more drinks, and conversation until late in the evening.

Over the two days, there was plenty of hacking, setting up Known and Smallest Federated Wiki as part of Reclaim Your Domain. Most attendees got to work on their #Reclaim definitions and POSSE workflow using Known, and learned how to generate API keys, commit to Github, and handle other essential #Reclaim tasks. At many other hackathons I’ve been to, there were tangible projects that came out of the event, but they were always abandoned after the short weekend. #Reclaim didn’t produce any single startup or application, but deployed and evolved on top of existing work and processes that will continue long after this single event, and will continue to build momentum with each event we do--capturing much more of the exhaust from a weekend hackathon.

The Time Is Right For #Reclaim
I feel #Reclaim is in motion, and there is no stopping it now. Each of the three events I’ve done has been extremely fruitful, and the ideas, conversation, and code just flow. I see signs across the Internet that some people are beginning to care more about their digital self, in light of exploitation from government and technology companies. It is not an accident that this movement is coming out of higher education institutions, and it will continue to spread, and build momentum at universities. The time is right for #Reclaim, I can feel it.



from http://ift.tt/1A3RPEO

Wednesday, July 16, 2014

Driving The #Reclaim Process Using Terms Of Service Didn't Read

I’m thinking through some of the next steps for my Reclaim Your Domain process, in preparation for a hackathon we have going on this weekend. Based upon defining, and executing on, my own #Reclaim process, I want to come up with a v1 proposal for one possible vision of the larger #Reclaim lifecycle.

My vision for Reclaim Your Domain is to not create yet another system we have to become a slave to. I want #Reclaim to be an open framework that helps guide people through reclaiming their own domain, and encourages them to manage and improve their digital identity through the process. With this in mind I want to make sure I don’t re-invent the wheel, and build off of any existing work that I can.

One of the catalysts behind Reclaim Your Domain for me was watching the Terms of Service Didn’t Read project, aimed at building understanding of the terms of service for the online services we depend on. Since the terms of service of my platforms are the driving force behind the #Reclaim decisions that I make, I figured we should make sure to incorporate TOS Didn’t Read into our #Reclaim efforts--why re-invent the wheel!

There is a lot going on at TOS Didn’t Read, but basically they have come up with a tracking and rating system for making sense of the very legalese-heavy TOS of the services that we depend on. They have three machine readable elements that make up their tracking and rating system:

  • Services (specification) (listing) - Online service providers that we all depend on.
  • Topics (specification) (listing) - A list of topics being applied and discussed at the TOS level.
  • Points (specification) (listing) - A list of specific points, within various topics, that apply directly to a service's TOS.

This gives me the valuable data I need for each person's reclaim process, and insight into the actual terms of service, allowing me to educate myself, as well as anyone else who embarks on reclaiming their domain. I can drive the list of services using TOS Didn’t Read, as well as educate users on the topics and points that are included. As part of #Reclaim, we will have our own services, topics, and points that we may, or may not, commit back to the master TOS Didn’t Read project—allowing us to build upon, augment, and contribute back to this very important work, already in progress.

Next, as part of the #Reclaim process, I will add in two other elements:

  • Lifebits - Definitions of the specific types of content and data that we manage as part of our digital life.
  • Actions - Actions that we take to reclaim our lifebits from the services we depend on.

I will use a similar machine readable, Github driven format like the one the TOS Didn’t Read group has used. I don’t even have a v1 draft of what the specification for lifebits and actions will look like, I just know I want to track on my lifebits, as well as the services these lifebits are associated with, and ultimately be able to take actions against those services--one time, or on a regular basis.
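
Just to make the idea a little more concrete, a lifebit entry might end up looking something like this; this is purely a hypothetical sketch, since as I said there is no v1 draft of the specification yet:

```json
{
  "lifebit": "photos",
  "description": "Images I have published as part of my digital self",
  "services": ["Flickr", "Instagram"],
  "actions": [
    { "type": "backup", "target": "my own domain", "frequency": "weekly" },
    { "type": "delete", "target": "Instagram", "frequency": "one-time" }
  ]
}
```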

I want to add to the number of TOS Didn't Read points available, but provided in a #Reclaim context. I think that once we beta test the #Reclaim process with a group of individuals, we will produce some pretty interesting topics and points that will matter the most to the average Internet user. With each #Reclaim, the overall #Reclaim process will get better, while also contributing to a wider understanding of how leading technology providers are crafting their terms of service (TOS), and ultimately working with or against the #Reclaim process.

These are just my preliminary thoughts on this. I’ve forked the TOS Didn’t Read repository into the #Reclaim Github organization. Next I will make the services available in machine readable JSON, driven using the TOS Didn’t Read services, within my personal #Reclaim project. Then I will be able to display the existing topics, points, and even the TOS Didn’t Read ranking for each of the services I depend on. Not sure what is after that, we’ll tackle this first, then I feel I’ll have a new understanding to move forward from.



from http://ift.tt/1mVZcbD

Tuesday, July 15, 2014

Considering Amazon Web Service's Continued Push Into Mobile

I am still processing the recent news of Amazon Mobile Services. Over time Amazon is continuing to push into the BaaS world, to complement their existing IaaS and PaaS ecosystem. Amazon is a cloud pioneer, and kind of has the first mover, 1000lb gorilla advantage when it comes to delivering cloud services. 

At this moment, I just thought their choice of core services was extremely interesting, and telling of what is important to mobile developers:

  • Authenticate Users - Manage users and identity providers.
  • Authorize Access - Securely access cloud resources.
  • Synchronize Data - Sync user preferences across devices.
  • Analyze User Behavior - Track active users and engagement.
  • Manage Media - Store and share user-generated photos and other media items.
  • Deliver Media - Automatically detect mobile devices and deliver content quickly on a global basis.
  • Send Push Notifications - Keep users active by sending messages reliably.
  • Store Shared Data - Store and query NoSQL data across users and devices.
  • Stream Real-Time Data - Collect real-time clickstream logs and react quickly.

You really see the core stack for mobile app development represented in that bulleted list of backend services for mobile. I'm still looking through what Amazon is delivering, as part of my larger BaaS research, but I think this list, and what they chose to emphasize, is very relevant to the current state of the mobile space. 

It is kind of like steering a large ocean vessel, it takes some time to change course, but now that Amazon has set its sights on mobile, I think we will see multiple waves of mobile solutions coming from AWS. 

I'll keep an eye on what they are up to and see how it compares to other leading mobile backend solutions. Seems like AWS is kind of becoming a bellwether for what is becoming mainstream, when it comes to delivering infrastructure for mobile and tablet app developers.



from http://ift.tt/1nGMNJi

Monday, July 7, 2014

Nothing Happens Until I Write It Down

Now that I've reached the age of 40, I've learned a lot about myself, how my mind works, what I remember and what I don’t. I've also learned a lot about how I'm perceived and remembered by the world around me, both physically and virtually.

I first started a blog in 2006, and it took me about 4 years until I found a voice that mattered. It wasn't just a voice that mattered to readers, it was a voice that mattered to me. If I don't blog about something I found, I don't remember it, resulting in it never happening.

I don't have any examples of this happening, because anything that fell through the cracks never happened. This is why my public and private domains are so critical, they provide me with my vital recall of facts and information, but also become a record of history—defining what has happened.

How much of history do we retain from written accounts? If we don’t write history down, it doesn't happen—now we generate reality partially through online publishing.



from http://ift.tt/1n1pXa6

Thursday, July 3, 2014

Intellectual Bandwidth

I got all high on making up new phrases yesterday with Intellectual Exhaust, and was going to write another post on Intellectual Bandwidth (IB), based upon the tweet from Julie Ann Horvath (@nrrrdcore):

Then I Googled Intellectual Bandwidth and came up with this definition:

An organization's Intellectual Bandwidth (IB) is its capacity to transform External Domain Knowledge (EDK) into Intellectual Capital (IC), and to convert IC into Applied Knowledge (AK), from which a task team can create value.

That is a lot of bullshit acronyms, and with that, my bullshit phrase creation spree comes to an end.



from http://ift.tt/1j0To1e

Wednesday, July 2, 2014

Intellectual Exhaust (IE)

As I generate shitloads of content playing the API Evangelist on the Internets, I struggle with certain terms as I write each day—one of these is intellectual property (IP), which Wikipedia defines as:

Intellectual property (IP) rights are the legally recognized exclusive rights to creations of the mind.[1] Under intellectual property law, owners are granted certain exclusive rights to a variety of intangible assets, such as musical, literary, and artistic works; discoveries and inventions; and words, phrases, symbols, and designs. Common types of intellectual property rights include copyright, trademarks, patents, industrial design rights, trade dress, and in some jurisdictions trade secrets.

I don’t like the phrase intellectual property, specifically because it includes “property”. Nothing that comes from my intellect is property. Nothing. It isn’t something you can own or control. Sorry, what gets generated from my intellect, wants to be free, not owned or controlled—it is just the way it is. I cannot be creative, generate my ideas and projects, if I know the output or results will be locked up.

With this in mind I want to craft a new expression to describe the result of my intellectual output, which I’m going to call intellectual exhaust (IE). I like the term exhaust, which has numerous definitions, and reflects what can be emitted from my daily thoughts. You are welcome to collect, observe, remix, learn from, or get high off of the exhaust of my daily work—go right ahead, this is one of the many reasons I work so hard each day. You my loyal reader. One. Single.

In my opinion, you can even make money off my intellectual exhaust, however, no matter what you do, make sure you attribute back, letting people know where your ideas come from. And if you do make some money from it, maybe you can kick some of that back, supporting the things that fuel my intellectual exhaust: sleep, food, water, beer, and interactions with other smart people around the globe. ;-)

P.S. There are other things that fuel my intellectual exhaust, but my lawyer and my girlfriend say I can’t include some of them.

P.P.S. My girlfriend is not my lawyer.



from http://ift.tt/1qnbrPR

Smallest Federated Wiki Blueprint Evangelism

I’m playing with a new tool that was brought to my attention called the Smallest Federated Wiki (SFW), a dead-simple, yet powerfully extensible, federated online platform solution that Mike Caulfield has been writing about. In his latest post about the Smallest Federated Wiki as a Universal JSON Canvas, Mike opens up with a story of how hard tech evangelism is, with an example from Xerox PARC:

Watching Alan Kay talk today about early Xerox PARC days was enjoyable, but also reminded me how much good ideas need advocating. As Kay pointed out repeatedly, explaining truly new ways of doing things is hard.

First, without Mike’s continued storytelling around this work, it wouldn’t continue to float up on my task list. His story about it being a “Universal JSON Canvas” caught my attention, and lit a spark that trumped the shitload of other work I should be doing this morning. Evangelism is hard. Evangelism is tedious. Evangelism requires constant storytelling, in real-time.

Second, for the Smallest Federated Wiki (SFW) to be successful, evangelism will have to be baked in. I will be playing with this concept more, to produce demonstrable versions, but SFW + Docker containers will allow for a new world of application collaboration. The Amazons, Googles, and Microsofts of the world are taking popular open source platform combinations like Wordpress and Drupal, and creating container definitions that include Linux, PHP, MySQL, and other platform essentials, that can be easily deployed in any environment—now think about the potential of SFW + Docker.

Following this new container pattern, I can build out small informational wikis using SFW, add on plugins, either standard or custom, and create a specific implementation of SFW, which I can deploy as a container definition with the underlying Linux, Node.js, and other required platform essentials. This will allow me to tailor specific SFW blueprints that anyone else can deploy with the push of a button. Think courseware, research, curation, and other collaborative classroom application scenarios--I can establish a base SFW template, and let someone else run with the actual implementation.

Now, bringing this back home to evangelism--Mike doesn’t have to run around explaining to everyone what SFW does (well, he should be, but not to EVERYONE). People who care about specific domains can build SFW blueprints, utilize containers on Amazon, Google, Microsoft, and other providers to deploy those blueprints, and through evangelizing their own SFW implementations, will evangelize what SFW is capable of to other practitioners--federated evangelism, baked in too! ;-)

The federation of evangelism will be how the Smallest Federated Wiki spreads like a virus.



from http://ift.tt/1qmTrF5

Making More Time To Play With The Smallest Federated Wiki

I'm always working to better understand the best of breed technology solutions available online today, and to me this means lightweight, machine readable apps that do one thing and do it well. One solution I’m looking at is the Smallest Federated Wiki, brought to my attention by Mike Caulfield (@holden), which has been on my list for several weeks now, but one of his latest posts has floated it back onto my priority list.

To understand what the Smallest Federated Wiki (SFW) is, check out the video. I haven’t personally downloaded and installed it yet, which is something I do with all solutions that I’m evaluating. SFW is Node.js, and available on Github, if you want to play with it as well--I'm going to be installing it on AWS if you need an AMI. This post is all about understanding SFW, lighting a fire under my own use of SFW, and hopefully stimulating your interest.

Simple
Building off the simplicity of the wiki, SFW borrows the best features of the wiki and Github, rolling them together into a simple, but ultimately powerful implementation that embraces the latest in technology, from Node.js to HTML5. I know how hard it can be to achieve "simple", and while playing with SFW, I can tell a lot of work has gone into keeping things as fucking simple as possible. #win

Federated
I love me some Wikipedia and Github, but putting my valuable content, and hard work, into someone else’s silo is proving to be a very bad idea. For all of my projects, I want to be able to maximize collaboration, syndication, and reach, without giving away ownership of my intellectual exhaust (IE). SFW reflects this emotion, and allows me to browse other people’s work, fork, and re-use, while also maintaining my own projects within my silo, and enabling other people to fork, and re-use from my work as well--SFW is a sneak peek at how ALL modern applications SHOULD operate.

JSON Extensible
SFW has the look and feel of a new age wiki, allowing you to generate pages and pages of content, but the secret sauce underneath is JSON. Everything on SFW is JSON driven, allowing for unlimited extensibility. Mike's latest blog post on how SFW’s extensibility is unlimited, due to its JSON driven architecture, is why I'm floating SFW back onto my review list. My 60+ API Evangelist projects all start with basic page and blog content, but then rely on JSON driven research for companies, building blocks, tools, services, and many other data points that I track on for the space—SFW reflects the JSON extensibility I’ve been embracing for the last couple years, but I'm doing this manually, while SFW does it by default.
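
I'm not going to reproduce the exact SFW schema here, but conceptually each page is just a JSON document along these lines, with the content items and the edit history both machine readable (field names illustrative):

```json
{
  "title": "Example Page",
  "story": [
    { "type": "paragraph", "text": "Plain content item." },
    { "type": "data", "text": "A plugin-driven item carrying structured JSON." }
  ],
  "journal": [
    { "type": "create", "date": 1404300000000 }
  ]
}
```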

Simplicity And Complexity
SFW achieves a simplicity, combined with the ability to extend the complexity in any way you choose. I can create a simple 3 page project site with it, or I could create a federated news application, allowing thousands of people to publish, curate, fork, remix, and collaborate around links—think Reddit, but federated. I envision SFW templates or blueprints, that allow me to quickly deploy a basic project blog, or CRM, news, research, and other more complex solutions. With new cloud deployment options like Docker emerging, I see a future where I can quickly deploy these federated blueprints, on the open web, on-premise, or anywhere I desire.

I have a lot of ideas that I want to contribute to the SFW roadmap, but I need to get more seat hours playing with the code before I can intelligently contribute. Once I get my base SFW setup, I will start brainstorming on the role APIs can play in the SFW plugin layer, and scenarios for rapidly building SFW blueprint containers.

P.S. While SFW has been on my Evernote todo list for several weeks, it was Mike's continued storytelling which bumped up the priority. Without the storytelling and evangelism, nothing happens--something Mike references in his post.



from http://ift.tt/1rjXMe0

Tuesday, July 1, 2014

Remembering My Friend Pat Price

Sometimes you meet people and you automatically know that they are someone you will know for a very long time, with a sense that you’ve known them before, in many previous lives. This was the way I felt when I first met Patrick Price. He was polite, cordial, but quiet when we first met, but after several conversations he had a familiar energy to him that put me at ease pretty quickly.

The first thing I learned about Pat was that he had an obsessive work ethic. He didn’t just take pride in his work, he was obsessive about making sure things were done, and that they were done right--no excuses. When looking back through photos of after-work events, where the rest of us were already blowing off steam, Pat was very rarely present, most likely back on location, making sure everything was put away, ready for the next day.

If you deserved it, Pat would have your back. If you did not, you wouldn’t. Pat is someone I would have on my side in a gunfight, no matter where in the world, or where in time. He would have stood tall, until the final moments. This is how I picture Pat leaving this world, in a standoff, in a remote part of town, protecting a group of his friends.

When you came to see Pat, he was always on the phone with someone, and you almost always had to wait 10-15 minutes before he had time for you. This was the way it worked, you couldn’t just walk into the office and expect him to have time for you. Pat had a long list of tasks, and people he was dealing with—you always had to accept your place in line, and make the most of it when you could.

When I got the news of his passing, I was overcome with concern that I hadn't stopped by to see him on my latest trip south from Oregon to Los Angeles. Then I remembered all the other amazing pit stops from the past, where I stopped and talked for 30 minutes, went for a drink, or had dinner. If you could wait 15 minutes to see him, he was always good for a meaningful conversation that went deep, followed by a solid man-hug, before hitting the road again.

Pat was also a constant presence in the background of my digital self. While I cherished my memories of stopping in to say hello in person, I enjoyed his constant presence on every one of my Foursquare checkins around the globe, and Twitter interactions around random topics, places, pics, and experiences. Pat shared my love of food, drink, and good music, and took the opportunity to chime in on every experience I shared on the Internetz.

I’m going to miss Pat. I will think about him regularly, throughout my life. He will never diminish in my memories, because I know I will see him again soon—for the same reasons, when I first met him, I knew he was my family.



from http://ift.tt/Vam3WI

Wednesday, June 18, 2014

Disrupting The Panel Format At API Craft SF

Last week I participated in a panel at API Craft San Francisco with Uri Sarid (@usarid), Jakub Nesetril (@jakubnesetril), and Tony Tam (@fehguy), moderated by Emmanuel Paraskakis (@manp), at the 3Scale office.

The panel started with me as the last person in the row of panelists. Emmanuel asked his first question, passing the microphone to Uri, who was first in line; once Uri was done he handed the mic to Jakub, then to Tony, and lastly to me.

As Emmanuel asked his second question I saw the same thing happening. He handed the microphone to Uri, then Jakub, and then Tony. Even though the questions were good, the tractor beam of the panel was taking hold, making it more of an assembly line than a conversation.

I needed to break the energy, and as soon as I got the microphone in my hand I jumped up and made my way through the crowd, around the back, to where the beer was, and helped myself to a fresh Stone Arrogant Bastard (ohh the irony). I could have cut through the middle, but I wanted to circle the entire audience as I slowly gave my response to the question.

With beer in hand I slowly walked back up, making reference to various people in the audience, hoping by the time I once again joined the panel, the panel vibe had been broken, and the audience would be part of the conversation. It worked, and the audience began asking more questions, to which I would jump up and bring the mic to them--making them part of the panel discussion.

I don’t think the panel format is broken, I just think it lends itself to some really bad implementations. You can have a good moderator, and even good panelists, but if you don’t break the assembly line of the panel, and make it a conversation amongst not just the panelists, but also the audience—the panel format will almost always fail.



from http://ift.tt/1lFFqMs

Monday, June 9, 2014

Exhaust From Crunching Open Data And Trying To Apply Page Rank To Spreadsheets

I stumbled across a very interesting post on pagerank for spreadsheets. The post is a summary of a talk, but it provides an interesting look at trying to understand open data at scale, something I've tried doing several times, including my Adopt A Federal Government Dataset work, which reminds me of how horribly out of date it all is.

There is a shitload of data stored in Microsoft Excel, Google Spreadsheet, and CSV files, and trying to understand where this data is, and what is contained in these little data stores, is really hard. This post doesn’t provide the answers, but it gives a very interesting look into what goes into trying to understand open data at scale.

The author acknowledges something I find fascinating, that “search for spreadsheet is hard”—damn straight. He plays with different ways of quantifying the data, based upon the number of columns, rows, content, data size, and even file formats.

This type of storytelling from the trenches is very important. Every time I work to download, crunch and make sense of, or quantify open data, I try to tell the story in real-time. This way much of the mental exhaust from the process is public, potentially saving someone else some time, or helping them see it through a different lens.

Imagine if someone made the Google, but just for public spreadsheets. Wish I had a clone!



from http://ift.tt/1uMOczI

Ken Burns: History of Computing

I’m enjoying Mike Amundsen’s keynote from API Strategy & Practice in Amsterdam again, Self-Replication, Strandbeest, and the Game of Life: What von Neumann, Jansen, and Conway can teach us about scaling the API economy.

As I listen to Mike’s talk, and other talks like Bret Victor’s “The Future of Programming”, I’m reminded of how important knowing our own history is, and for some strange reason, in Silicon Valley we seem to excel at doing the opposite, making a habit of forgetting our own history of computing.

The conversation around remembering the history of computing came up between Mike Amundsen and me, during a beer fueled discussion in the Taproom at Gluecon in Colorado last May. As we were discussing the importance of the history of technology, the storytelling approach of Ken Burns came up, and Mike and I were both convinced that Ken Burns needs to do a documentary series on the history of computing.

There is something about the way that Ken Burns does a documentary that can really reach our hearts and minds, and Silicon Valley needs a neatly packaged walkthrough of our computing history from, say, 1840 through 1970. I think we’ve heard enough stories about the PC era, Bill Gates, and Steve Jobs, and what we need is a brush-up on the hundreds of other personalities that gave us computing, and ultimately the Internet.

My mother gave me a unique perspective: that I can manifest anything. So I will make this Ken Burns: History of Computing series happen, but I need your help. I need you to submit the most important personalities and stories you know from the history of computing that should be included in this documentary. To submit, just open an issue on the Github repository for this site, or if you are feeling adventurous, submit a Jekyll blog post for this site, and I'll accept your commit.

Keep any submission focused, and about just a single person, technology, or idea. Once we get enough submissions, we can start connecting the dots, weaving together any further narratives. My goal is to generate enough research for Mr. Burns to use when he takes over the creative process, and hopefully to generate enough buzz to get him to even notice that we exist. ;-)

It is my belief that we are at a critical junction where our physical world is colliding with this new virtual world, driven by technology. To better understand what is happening, I think we need to pause, and take a walk through our recent history of compute technology, and learn more about how we got here--I couldn’t think of a better guide than Ken Burns.

Thanks for entertaining my crazy delusions, and helping me assemble the cast of characters that Ken Burns can use when crafting The History of Computing. Hopefully we can learn a lot along the way, as well as use the final story to help bring everyone up to speed on this crazy virtual world we’ve created for ourselves.

Photo Credit: Hagley Museum and Library and UNISYS



from http://ift.tt/1pXa6QR

Friday, June 6, 2014

The Black, White And Gray of Web Scraping

There are many reasons for wanting to scrape data or content from a public website. I think these reasons can be easily represented as different shades of gray, the darker the gray, the less legal you could consider it, and the lighter the gray, the more legal. You with me?

An example of darker gray would be scraping classified ad listings from Craigslist for use on your own site, where an example of lighter gray could be pulling a listing of veterans hospitals from the Department of Veterans Affairs website for use in a mobile app that supports veterans. One is corporate owned data, and the other is public data. The motives for wanting either set of data would potentially be radically different, and the restrictions on each set of data would be different as well.

Many opponents of scraping don't see the shades of gray, they just see people taking data and content that isn't theirs. Proponents of scraping will have an array of opinions, ranging from people who feel that if it is on the web, it should be available to everyone, to people who would only scrape openly licensed or public data, and stay away from anything proprietary.

Scraping of data is never a black and white issue. I’m not blindly supporting scraping in every situation, but I am a proponent of sensible approaches to harvesting valuable information, the development of open source tools, as well as services that assist users in scraping.



from http://ift.tt/1i95vDj

Github Commit Storytelling: Now or Later

When you are making Github commits you have to provide a story that explains the changes you are committing to a repository. Many of us just post 'blah blah', 'what I said last time', or any other garbage that just gets us through the moment. You know you’ve all done it at some point.

This is a test of your ability to tell a story for the future, to be heard by your future self, or someone else entirely. While in the moment it may seem redundant and worthless, when you think of the future and how this will look when it is being read by a future employer, or someone that is trying to interpret your work, things will be much different. #nopressure

In the world of Github, especially when your repositories are public, each commit is a test of your storytelling ability and how well you can explain this moment for future generations. How will you do on the test? I would say that I'm C grade, and this post is just a reminder for me.



from http://ift.tt/1i95wY0

Thursday, June 5, 2014

Beta Testing Linkrot.js On API Evangelist

I started beta testing a new JavaScript library, combined with an API, that I’m calling linkrot.js. My goal is to address link rot across my blogs. There are two main reasons links go bad on my site: either I moved the page or resource, or an external website or other resource has gone away.

To help address this problem, I wrote a simple JavaScript file that lives in the footer of my blog, and when the page loads, it spiders all the links on the page, combines them into a single list, and then makes a call to the linkrot.js API.

All new links will get a URL shortener applied, as well as a screenshot taken of the page. Every night a script will run to check the HTTP status of each link used in my site—verifying the page exists, and is a valid link.

Every time linkrot.js loads, it spiders the links available in the page and syncs with the linkrot.js API, which returns the corresponding shortened URL for each link. If a link shows a 404 status, it will no longer link to the page; instead it will pop up the last screenshot of the page, identifying that the page no longer exists.
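
The JavaScript and the API are still private, but conceptually the nightly check is dead simple; here is a rough sketch of that piece in Python, with the link storage and status handling as stand-ins for whatever the real linkrot.js backend ends up using:

```python
# Rough sketch of the nightly link check behind linkrot.js: take every link
# I know about, record its HTTP status, and flag anything that 404s so the
# front-end can swap in the last screenshot instead of a dead link.
import json

import requests

LINK_STORE = "links.json"  # stand-in for the real linkrot.js storage

def check_links(path):
    with open(path) as handle:
        links = json.load(handle)
    for link in links:
        try:
            status = requests.head(link["url"], allow_redirects=True, timeout=15).status_code
        except requests.RequestException:
            status = 0  # unreachable
        link["status"] = status
        link["dead"] = status == 404 or status == 0
    with open(path, "w") as handle:
        json.dump(links, handle, indent=2)

if __name__ == "__main__":
    check_links(LINK_STORE)
```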

Eventually I will be developing a dashboard, allowing me to manage the link rot across my websites, one that makes suggestions on links I can fix, provides a visual screen capture of those I cannot, while also adding a new analytics layer by implementing shortened URLs.

Linkrot.js is just an internal tool I’m developing in private beta. Once I get it up and running, Audrey will beta test it, and we’ll see where it goes from there. Who knows!



from http://ift.tt/1mgW2ek