Tuesday, July 22, 2014

Reclaim Your Domain LA Hackathon Wrap-up

I spent the weekend hacking away with a small group of very smart folks at the Reclaim Your Domain Hackathon in Los Angeles. Fifteen of us gathered at Pepperdine University in west LA, looking to move the discussion forward around what we call “Reclaim Your Domain”. This conversation began last year at the #ReclaimOpen Hackathon, continued earlier this year at Emory University, and we were looking to keep the momentum building this weekend at Pepperdine.

Here is a breakdown of who was present this weekend:

  • Jim Groom - University of Mary Washington (@jimgroom)
  • Michael Caulfield - WSU Vancouver - http://hapgood.us/ - (@holden)
  • Michael Berman - California State University Channel Islands (@amichaelberman)
  • Chris Mattia - California State University Channel Islands (@cmmattia)
  • Brian Lamb - Thompson Rivers University (@brlamb)
  • Timothy Owens - University of Mary Washington (@timmmmyboy)
  • Mikhail Gershovich - Vocat (@mgershovich)
  • Amy Collier - Stanford (@amcollier)
  • Rolin Moe - Pepperdine (@RMoeJo)
  • Adam Croom - University of Oklahoma (@acroom)
  • Mark C. Morvant - University of Oklahoma (@MarkMorvant)
  • Ben Werdmuller — Withknown (@benwerd)
  • Erin Richey — Withknown (@erinjo)
  • Kin Lane — API Evangelist (@kinlane)
  • Audrey Watters — Hack Education (@audreywatters)

If you are unsure of what #Reclaim is all about, join the club--we are trying to define it as well. For more information, you can head over to Reclaim Your Domain, or you can also look at Reclaim Hosting, and the original Domain of One’s Own at the University of Mary Washington, which provided much of the initial spark behind the #Reclaim movement. Ultimately, #Reclaim will always be a very personal experience for each individual, but Reclaim Your Domain is primarily about: 

Educating and empowering individuals to define, reclaim, and manage their digital self

The primary thing I got out of this weekend, beyond the opportunity to hang out with such a savvy group of educators, was the opportunity to talk through my own personal #Reclaim process, as well as my vision of how we can use Terms of Service Didn’t Read as a framework for the #Reclaim process. The weekend reinforced the importance of APIs in higher education, not just in my own API Evangelist work, but in contributing to the overall health of the Internet (which I will talk about in a separate post).

To recap what I said above, there are three domains you will need to know about to follow my current #Reclaim work:

  • Reclaim Your Domain - The project site for all of this work, where you will find links to all information, a calendar of events, and links to individual reclaim sites.
  • Kin Lane Reclaim Your Domain - My personal #Reclaim website where I am working on reclaiming my domain, while I also work to define the overall process.
  • Terms of Service Didn’t Read - A website for establishing plain English, yet machine readable, discussions around the terms of service for the platforms we depend on.

During the weekend I was introduced to some other very important tools, which are essential to the #Reclaim process:

  • Known - A publishing platform that empowers you to manage your online self, available as an open source tool or as a cloud service. I’m still setting up my own instance of Known, and will have more information after I get set up and play with it more.
  • IndieAuth - IndieAuth is a way to use your own domain name to sign in to websites, which is a missing piece on the identity front for #Reclaim. Same as with Known, I’ll have more information on this after I play with it.

I also got some quality time getting more up to speed on two other tools that will be important to #Reclaim:

  • Smallest Federated Wiki - A simple, and powerful wiki implementation that uses the latest technology to collaborate around content, with a very interesting approach to using JSON, and plugins to significantly evolve the wiki experience.
  • Reclaim Hosting - Reclaim Hosting provides educators and institutions with an easy way to offer their students domains and web hosting that they own and control.

Over the course of two days I was able to share what I was working on, learn about @withknown and what @holden is up to with Smallest Federated Wiki, and get a closer look at what @timmmmyboy, @jimgroom, and @mburtis are up to with Reclaim Hosting, while also exploring some other areas I think are vital to #Reclaim moving forward. #WIN There were also some other really important takeaways for me.

POSSE is an acronym for Publish (on your) Own Site, Syndicate Elsewhere. For the first time I saw an application that delivers on this concept, while also holding potential for the whole reclaim your domain philosophy--Known. I am excited to fire up my own instance of Known and see how I can actually use it to manage my digital self, and add this slick piece of software to the #Reclaim stack of tools that everyone else can put to use.
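Known handles all of this for you, but the POSSE flow itself is simple enough to sketch. Here is a toy version in Python--the URL, silo names, and function are all hypothetical, purely to show the pattern of canonical post on your own site first, short copies syndicated elsewhere:

```python
def syndication_copy(post, silo, limit=100):
    """POSSE: the canonical post lives on your own site, and each
    silo only gets a short copy that links back to the original."""
    summary = post["body"][:limit].rstrip()
    return {"silo": silo, "text": f"{summary} {post['url']}"}

# The canonical post, published on my own domain first
post = {
    "url": "http://example.com/2014/07/reclaim-la",
    "body": "Spent the weekend hacking on #Reclaim at Pepperdine...",
}

# Then syndicate elsewhere, always pointing back home
for copy in [syndication_copy(post, s) for s in ("twitter", "facebook")]:
    print(copy["silo"], "->", copy["text"])
```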

The Importance of API 101
During the second day of the hackathon, I was asked to give an API 101 talk. I headed to the other end of the meeting area, allowing anyone who wasn’t interested in listening to me to continue hacking on their own. I was surprised to see that everyone joined me to learn about APIs--well, everyone except @audreywatters (API blah blah blah). I felt like everyone had a general sense of what an API was, but only a handful of folks possessed intimate knowledge. I used the separation of websites being for humans, and APIs being for other applications and systems, as the basis for my talk--showing how websites return HTML for displaying to humans, and APIs return JSON meant to be used by applications. Later in the day I also wrote a little PHP script which made a call to an API (well, a JSON file), then displayed a bulleted list of results, to help show how APIs can drive website content. Once again I am reminded of the importance of API 101 demos, and how I need to focus more in this area.
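My demo script was PHP, but the same idea sketched in Python looks like this--the JSON is made-up demo data, and in the live demo the “API” was really just a static JSON file:

```python
import json

def render_list(records):
    """Turn API results (JSON, for machines) into HTML (for humans)."""
    items = "\n".join(f"  <li>{r['name']}</li>" for r in records)
    return f"<ul>\n{items}\n</ul>"

# In the demo the "API" was a static JSON file; in real life you would
# fetch this with urllib.request.urlopen(url).read() instead.
payload = '[{"name": "Reclaim Your Domain"}, {"name": "Known"}]'
print(render_list(json.loads(payload)))
```

The whole point of the demo was that one line of JSON can drive the content a website displays.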

The Importance of Github 101
One of the topics we covered was the basics of using Github. I walked through the basic concepts that surround Github usage, like repositories, forks, commits, and pull requests--demonstrating how multiple users can collaborate and work together on not just code, but content. I demonstrated how issues and milestones can also be used to manage conversation around a project, in addition to work on repository files. Lastly, I walked through Github Pages, and how, using a separate branch, you can publish HTML, CSS, JavaScript, and JSON for projects, turning Github into not just a code and content management platform, but also a publishing endpoint.

APIs.json Playing a Role in #Reclaim
After hearing @timmmmyboy talk about how Reclaim Hosting aggregates domain users within each university, I brought up APIs.json, and how I’m using this as an index for APIs in both the public and private sector. While it may not be something that is in the immediate roadmap for #Reclaim, I think APIs.json will play a significant role in the #Reclaim process down the road, and is worth noting here.
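To give a sense of what that index looks like, here is a rough apis.json sketch for a personal #Reclaim domain. The URLs are placeholders, and the APIs.json specification is the authority on the exact fields:

```json
{
  "name": "My Reclaim Domain",
  "description": "An index of the APIs behind my personal #Reclaim domain",
  "url": "http://example.com/apis.json",
  "apis": [
    {
      "name": "Blog API",
      "description": "Posts from my blog in machine readable form",
      "humanURL": "http://example.com/blog",
      "baseURL": "http://example.com/api/blog",
      "properties": [
        { "type": "Swagger", "url": "http://example.com/api/blog/swagger.json" }
      ]
    }
  ]
}
```

Drop a file like this at the root of each domain, and an aggregator like Reclaim Hosting could discover and index every user’s APIs automatically.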

Containerization Movement
One pattern I saw across the Reclaim Hosting and Domain of One’s Own work from @timmmmyboy, @jimgroom, and @mburtis, is that they are mimicking what is currently happening in the @docker dominated containerization movement we are seeing from tech leaders like Amazon, Google, Microsoft, and Red Hat. The only difference is Reclaim Hosting is doing it as apps that can be deployed across a known set of domains, spanning physical servers, within a particular institution. Containers offer portability for the #Reclaim lifecycle, for when students leave institutions, as well as for the wider public space, when people are looking to #Reclaim their digital self.

Importance of APIs in Higher Ed
APIs are central to everything about #Reclaim. They are how users will take control over their digital self, driving solutions like Known. With #Reclaim being born, and propagated via universities, the API stakes are raised across higher education. Universities need to adopt an API way of life, not only to drive all aspects of campus operations, but also to expose all students to the concept of APIs, making it part of the university experience--teaching students to #Reclaim their own course schedule, student data, portfolio, and other aspects of the campus experience. Universities are ground zero when it comes to exposing the next generation of corporate employees, government workers, and #Reclaim informed citizens--we have a lot of work to do to get institutions up to speed.

Evolving the Hackathon Format
The Reclaim Your Domain LA Hackathon has moved forward the hackathon definition for me. There were no applications built over the weekend, and there were no prizes given away to winners, but there was significant movement that will live beyond just this single event--something that the previous definition of hackathon didn’t possess for me. Fifteen of us came together Friday night for food and drink at @amichaelberman’s house. Saturday morning we came together at Pepperdine and spent the day working through ideas and tool demonstrations, which included a lot of open discussion. Saturday night we came together at our house in Hermosa Beach, where we drank, continued conversations from the day, and Jazzercised on the roof until the wee hours of the morning. Then Sunday we came together for breakfast, and went back to work at Pepperdine for the rest of the day. Once done, some folks headed to the airport, and the rest of us headed back to Hermosa Beach for dinner, more drinks, and conversation until late in the evening.

Over the two days, there was plenty of hacking, setting up Known and Smallest Federated Wiki as part of Reclaim Your Domain. Most attendees got to work on their #Reclaim definitions and POSSE workflow using Known, and learned how to generate API keys, commit to Github, and other essential #Reclaim tasks. At many other hackathons I’ve been to, there were tangible projects that came out of the event, but they were always abandoned after the short weekend. #Reclaim didn’t produce any single startup or application, but deployed and evolved on top of existing work and processes that will continue long after this single event, and will continue to build momentum with each event we do--capturing much more of the exhaust from a weekend hackathon.

The Time Is Right For #Reclaim
I feel #Reclaim is in motion, and there is no stopping it now. Each of the three events I’ve done has been extremely fruitful, and the ideas, conversation, and code just flow. I see signs across the Internet that some people are beginning to care more about their digital self, in light of exploitation from government and technology companies. It is not an accident that this movement is coming out of higher education institutions, and it will continue to spread, and build momentum, at universities. The time is right for #Reclaim, I can feel it.

from http://ift.tt/1A3RPEO

Wednesday, July 16, 2014

Driving The #Reclaim Process Using Terms Of Service Didn't Read

I’m thinking through some of the next steps for my Reclaim Your Domain process, in preparation for a hackathon we have going on this weekend. Based upon defining, and executing on my own #Reclaim process, I want to come up with a v1 proposal, for one possible vision for the larger #Reclaim lifecycle.

My vision for Reclaim Your Domain is to not create yet another system we have to become slaves to. I want #Reclaim to be an open framework that helps guide people through reclaiming their own domain, and encourages them to manage and improve their digital identity through the process. With this in mind I want to make sure I don’t re-invent the wheel, and build off of any existing work that I can.

One of the catalysts behind Reclaim Your Domain for me was watching the Terms of Service Didn’t Read project, aimed at building understanding of the terms of service for the online services we depend on. Since the terms of service of my platforms are the driving force behind the #Reclaim decisions that I make, I figured that we should make sure to incorporate TOS Didn’t Read into our #Reclaim efforts--why re-invent the wheel!

There is a lot going on at TOS Didn’t Read, but basically they have come up with a tracking and rating system for making sense of the very legalese TOS of the services that we depend on. They have three machine readable elements that make up their tracking and rating system:

  • Services (specification) (listing) - Online service providers that we all depend on.
  • Topics (specification) (listing) - A list of topics being applied and discussed at the TOS level.
  • Points (specification) (listing) - A list of specific points, within various topics, that apply directly to a service’s TOS.

This gives me the valuable data I need for each person’s reclaim process, and insight into the actual terms of service, allowing me to educate myself, as well as anyone else who embarks on reclaiming their domain. I can drive the list of services using TOS Didn’t Read, as well as educate users on the topics and points that are included. As part of #Reclaim, we will have our own services, topics, and points that we may, or may not, commit back to the master TOS Didn’t Read project--allowing us to build upon, augment, and contribute back to this very important work, already in progress.

Next, as part of the #Reclaim process, I will add in two other elements:

  • Lifebits - Definitions of the specific types of content and data that we manage as part of our digital life.
  • Actions - Actions that we take against reclaiming our lifebits, from the services we depend on.

I will use a similar machine readable, Github driven format like what the TOS Didn’t Read group has used. I don’t even have a v1 draft of what the specification for lifebits and actions will look like, I just know I want to track my lifebits, as well as the services these lifebits are associated with, and ultimately be able to take actions against those services--one time, or on a regular basis.
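Purely to illustrate the shape I have in mind, a hypothetical lifebit entry might look something like this--every field name here is invented, and will change once there is a real draft:

```json
{
  "lifebit": "photos",
  "description": "Images I have published to photo sharing services",
  "services": ["Flickr", "Instagram"],
  "actions": [
    {
      "name": "backup",
      "description": "Download a copy of my photos to my own domain",
      "frequency": "monthly"
    },
    {
      "name": "delete",
      "description": "Remove my photos from the service entirely",
      "frequency": "one-time"
    }
  ]
}
```

The `services` list is what ties a lifebit back to the TOS Didn’t Read services, topics, and points described above.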

I want to add to the number of TOS Didn't Read points available, but provided in a #Reclaim context. I think that once we beta test a group of individuals on the #Reclaim process, we will produce some pretty interesting topics, and points that will matter the most to the average Internet user. With each #Reclaim, the overall #Reclaim process will get better, while also contributing to a wider understanding of how leading technology providers are crafting their terms of service (TOS), and ultimately working with or against the #Reclaim process.

These are just my preliminary thoughts on this. I’ve forked the TOS Didn’t Read repository into the #Reclaim Github organization. Next I will make the services I depend on available in machine readable JSON, driven using the TOS Didn’t Read services, within my personal #Reclaim project. Then I will be able to display the existing topics, points, and even the TOS Didn’t Read ranking for each of the services I depend on. Not sure what comes after that--we’ll tackle this first, then I feel I’ll have a new understanding to move forward from.

from http://ift.tt/1mVZcbD

Tuesday, July 15, 2014

Considering Amazon Web Service's Continued Push Into Mobile

I am still processing the recent news of Amazon Mobile Services. Over time Amazon is continuing to push into the BaaS world, to complement their existing IaaS and PaaS ecosystem. Amazon is a cloud pioneer, and kind of has the first mover, 1000lb gorilla advantage when it comes to delivering cloud services. 

At this moment, I just thought their choice of core services was extremely interesting, and telling of what is important to mobile developers:

  • Authenticate Users - Manage users and identity providers.
  • Authorize Access - Securely access cloud resources.
  • Synchronize Data - Sync user preferences across devices.
  • Analyze User Behavior - Track active users and engagement.
  • Manage Media - Store and share user-generated photos and other media items.
  • Deliver Media - Automatically detect mobile devices and deliver content quickly on a global basis.
  • Send Push Notifications - Keep users active by sending messages reliably.
  • Store Shared Data - Store and query NoSQL data across users and devices.
  • Stream Real-Time Data - Collect real-time clickstream logs and react quickly.

You really see the core stack for mobile app development represented in that bulleted list of backend services for mobile. I'm still looking through what Amazon is delivering, as part of my larger BaaS research, but I think this list, and what they chose to emphasize, is very relevant to the current state of the mobile space. 

It is kind of like steering a large ocean vessel, it takes some time to change course, but now that Amazon has set its sights on mobile, I think we will see multiple waves of mobile solutions coming from AWS. 

I'll keep an eye on what they are up to, and see how it compares to other leading mobile backend solutions. Seems like AWS is kind of becoming a bellwether for what is becoming mainstream when it comes to delivering infrastructure for mobile and tablet app developers.

from http://ift.tt/1nGMNJi

Monday, July 7, 2014

Nothing Happens Until I Write It Down

Now that I've reached the age of 40, I've learned a lot about myself, how my mind works, what I remember and what I don’t. I've also learned a lot about how I'm perceived and remembered by the world around me, both physically and virtually.

I first started a blog in 2006, and it took me about 4 years until I found a voice that mattered. It wasn't just a voice that mattered to readers, it was a voice that mattered to me. If I don't blog about something I found, I don't remember it, resulting in it never happening.

I don't have any examples of this happening, because anything that fell through the cracks never happened. This is why my public and private domains are so critical: they provide me with vital recall of facts and information, but also become a record of history, defining what has happened.

How much of history do we retain from written accounts? If we don’t write history down, it doesn't happen—now we generate reality partially through online publishing.

from http://ift.tt/1n1pXa6

Thursday, July 3, 2014

Intellectual Bandwidth

I got all high on making up new phrases yesterday, with Intellectual Exhaust, and was going to write another on Intellectual Bandwidth (IB), based upon the tweet from Julie Ann Horvath (@nrrrdcore):

Then I Googled Intellectual Bandwidth and came up with this definition:

An organization's Intellectual Bandwidth (IB) is its capacity to transform External Domain Knowledge (EDK) into Intellectual Capital (IC), and to convert IC into Applied Knowledge (AK), from which a task team can create value.

That is a lot of bullshit acronyms, and with that, my bullshit phrase creation spree comes to an end.

from http://ift.tt/1j0To1e

Wednesday, July 2, 2014

Intellectual Exhaust (IE)

As I generate shitloads of content playing the API Evangelist on the Internets, I struggle with certain words, as I write each day—one of these words is intellectual property (IP), which Wikipedia defines as:

Intellectual property (IP) rights are the legally recognized exclusive rights to creations of the mind.[1] Under intellectual property law, owners are granted certain exclusive rights to a variety of intangible assets, such as musical, literary, and artistic works; discoveries and inventions; and words, phrases, symbols, and designs. Common types of intellectual property rights include copyright, trademarks, patents, industrial design rights, trade dress, and in some jurisdictions trade secrets.

I don’t like the phrase intellectual property, specifically because it includes “property”. Nothing that comes from my intellect is property. Nothing. It isn’t something you can own or control. Sorry, what gets generated from my intellect, wants to be free, not owned or controlled—it is just the way it is. I cannot be creative, generate my ideas and projects, if I know the output or results will be locked up.

With this in mind I want to craft a new expression to describe the result of my intellectual output, which I’m going to call intellectual exhaust (IE). I like the term exhaust, which has numerous definitions, and reflects what can be emitted from my daily thoughts. You are welcome to collect, observe, remix, learn from, or get high off of the exhaust of my daily work--go right ahead, this is one of the many reasons I work so hard each day. You my loyal reader. One. Single.

In my opinion, you can even make money off my intellectual exhaust, however, no matter what you do, make sure you attribute back, letting people know where your ideas come from. And if you do make some money from it, maybe you can kick some of that back, supporting the things that fuel my intellectual exhaust: sleep, food, water, beer, and interactions with other smart people around the globe. ;-)

P.S. There are other things that fuel my intellectual exhaust, but my lawyer and my girlfriend say I can’t include some of them.

P.P.S. My girlfriend is not my lawyer.

from http://ift.tt/1qnbrPR

Smallest Federated Wiki Blueprint Evangelism

I’m playing with a new tool that was brought to my attention, called the Smallest Federated Wiki (SFW), a dead-simple, yet powerfully extensible, federated online platform solution, championed by Mike Caulfield. In his latest post about Smallest Federated Wiki as a Universal JSON Canvas, Mike opens up with a story of how hard tech evangelism is, with an example from Xerox PARC:

Watching Alan Kay talk today about early Xerox PARC days was enjoyable, but also reminded me how much good ideas need advocating. As Kay pointed out repeatedly, explaining truly new ways of doing things is hard.

First, without Mike’s continued storytelling around his work, it wouldn’t continue to float up on my task list. His story about it being a “Universal JSON Canvas” caught my attention, and lit a spark that trumped the shitload of other work I should be doing this morning. Evangelism is hard. Evangelism is tedious. Evangelism requires constant storytelling, in real-time.

Second, for the Smallest Federated Wiki (SFW) to be successful, evangelism will have to be baked in. I will be playing with this concept more, to produce demonstrable versions, but SFW + Docker containers will allow for a new world of application collaboration. The Amazons, Googles, and Microsofts of the world are taking popular open source platform combinations like Wordpress and Drupal, creating container definitions that include Linux, PHP, MySQL, and other platform essentials, that can be easily deployed in any environment--now think about the potential of SFW + Docker.

Following this new container pattern, I can build out small informational wikis using SFW, add on plugins, either standard or custom, and create a specific implementation of SFW, which I can deploy as a container definition with the underlying Linux, Node.js, and other required platform essentials. This will allow me to tailor specific SFW blueprints that anyone else can deploy with the push of a button. Think courseware, research, curation, and other collaborative classroom application scenarios--I can establish a base SFW template, and let someone else run with the actual implementation.
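To make the blueprint idea concrete, a container definition for one of these SFW blueprints might look something like the Dockerfile below. This is purely a sketch: the npm package name, the pages path, and the port are all assumptions on my part, so check the SFW README for the real install steps.

```dockerfile
# A hypothetical SFW blueprint container
FROM node
# Assumption: SFW installs as an npm package called "wiki" --
# verify the actual package name against the project README
RUN npm install -g wiki
# Bake the seed pages (and plugins) that define this particular
# blueprint into the image, so anyone can deploy it with one command
COPY pages /root/.wiki/pages
EXPOSE 3000
CMD ["wiki", "--port", "3000"]
```

Build the image once, and a course, research group, or classroom can launch its own copy of the blueprint anywhere Docker runs.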

Now, bringing this back home to evangelism--Mike doesn’t have to run around explaining to everyone what SFW does (well, he should be, but not to EVERYONE). People who care about specific domains can build SFW blueprints, utilize containers on Amazon, Google, Microsoft, and other providers to deploy those blueprints, and through evangelizing their own SFW implementations, will evangelize what SFW is capable of to other practitioners--federated evangelism, baked in too! ;-)

The federation of evangelism will be how the Smallest Federated Wiki spreads like a virus.

from http://ift.tt/1qmTrF5

Making More Time To Play With The Smallest Federated Wiki

I'm always working to better understand the best of breed technology solutions available online today, and to me this means lightweight, machine readable apps that do one thing and do it well. One solution I’m looking at is called the Smallest Federated Wiki, from Mike Caulfield (@holden), which has been on my list for several weeks now, but one of his latest posts has floated it back onto my priority list.

To understand what the Smallest Federated Wiki (SFW) is, check out the video. I haven’t personally downloaded and installed it yet, which is something I do with all solutions that I’m evaluating. SFW is Node.js, and available on Github if you want to play with it as well--I'm going to be installing on AWS, if you need an AMI. This post is all about understanding SFW, lighting the fire under my own use of SFW, and hopefully stimulating your interest.

Building off the simplicity of the wiki, SFW borrows the best features of the wiki and Github, rolled together into a simple, but ultimately powerful implementation that embraces the latest in technology, from Node.js to HTML5. I know how hard it can be to achieve "simple", and while playing with SFW, I can tell a lot of work has gone into keeping things as fucking simple as possible. #win

I love me some Wikipedia and Github, but putting my valuable content and hard work into someone else’s silo is proving to be a very bad idea. For all of my projects, I want to be able to maximize collaboration, syndication, and reach, without giving away ownership of my intellectual exhaust (IE). SFW reflects this emotion, and allows me to browse other people’s work, fork, and re-use, while also maintaining my own projects within my silo, enabling other people to fork and re-use my work as well--SFW is a sneak peek at how ALL modern applications SHOULD operate.

JSON Extensible
SFW has the look and feel of a new age wiki, allowing you to generate pages and pages of content, but the secret sauce underneath is JSON. Everything on SFW is JSON driven, allowing for unlimited extensibility. Mike's latest blog post, on how SFW’s extensibility is unlimited due to its JSON driven architecture, is why I'm floating SFW back onto my review list. My 60+ API Evangelist projects all start with basic page and blog content, but then rely on JSON driven research for companies, building blocks, tools, services, and many other data points that I track on for the space--SFW reflects the JSON extensibility I’ve been embracing for the last couple of years, but I'm doing this manually, while SFW does it by default.
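To give a sense of what that looks like, an SFW page is stored as a single JSON document, roughly along these lines. This is a sketch from my early poking around, so treat the field names and values as approximate rather than authoritative:

```json
{
  "title": "Reclaim Your Domain",
  "story": [
    {
      "type": "paragraph",
      "id": "a2f8c1",
      "text": "Every item on the page is a typed JSON object."
    },
    {
      "type": "reference",
      "id": "b91d44",
      "site": "hapgood.us",
      "title": "Federated News"
    }
  ],
  "journal": [
    { "type": "create", "date": 1405987200000 }
  ]
}
```

Because every story item carries a `type`, a plugin can claim that type and render it however it wants--which is where the unlimited extensibility comes from.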

Simplicity And Complexity
SFW achieves a simplicity, combined with the ability to extend the complexity in any way you choose. I can create a simple 3 page project site with it, or I could create a federated news application, allowing thousands of people to publish, curate, fork, remix, and collaborate around links--think Reddit, but federated. I envision SFW templates, or blueprints, that allow me to quickly deploy a basic project blog, or CRM, news, research, and other more complex solutions. With new cloud deployment options like Docker emerging, I see a future where I can quickly deploy these federated blueprints on the open web, on-premise, or anywhere I desire.

I have a lot of ideas that I want to contribute to the SFW roadmap, but I need to get more seat hours playing with the code before I can intelligently contribute. Once I get my base SFW setup, I will start brainstorming on the role APIs can play in the SFW plugin layer, and scenarios for rapidly building SFW blueprint containers.

P.S. While SFW has been on my Evernote todo list for several weeks, it was Mike's continued storytelling which bumped up the priority. Without the storytelling and evangelism, nothing happens--something Mike references in his post.

from http://ift.tt/1rjXMe0

Tuesday, July 1, 2014

Remembering My Friend Pat Price

Sometimes you meet people, and you automatically know that they are someone you will know for a very long time, with a sense that you’ve known them before, in many previous lives. This was the way I felt when I first met Patrick Price. He was polite, cordial, but quiet when I first met him, but after several conversations, he had a familiar energy to him, that put me at ease pretty quickly.

The first thing I learned about Pat was that he had an obsessive work ethic. He didn’t just take pride in his work, he was obsessive about making sure things were done, and done right--no excuses. When looking back through photos of after work events, where the rest of us were already blowing off steam, Pat was very rarely present, most likely back on location, making sure everything was put away, ready for the next day.

If you deserved it, Pat would have your back. If you did not, you wouldn’t. Pat is someone I would have on my side in a gunfight, no matter where in the world, or where in time. He would have stood tall, until the final moments. This is how I picture Pat leaving this world, in a standoff, in a remote part of town, protecting a group of his friends.

When you came to see Pat, he was always on the phone with someone, and you almost always had to wait 10-15 minutes before he had time for you. This was the way it worked, you couldn’t just walk into the office, and he’d have time for you. Pat had a long list of tasks, and people he was dealing with—you always had to accept your place in line, and make the most of it when you could.

When I got the news of his passing, I was overcome with concern that I hadn't stopped by to see him on my latest trip south from Oregon to Los Angeles. Then I remembered all the other amazing pit stops from the past, where I stopped and talked for 30 minutes, went for a drink, or had dinner. If you could wait 15 minutes to see him, he was always good for a meaningful conversation that went deep, followed by a solid man-hug, before hitting the road again.

Pat was also a constant presence in the background of my digital self. While I cherished my memories of stopping in to say hello in person, I enjoyed his constant presence on every one of my Foursquare checkins around the globe, and Twitter interactions around random topics, places, pics, and experiences. Pat shared my love of food, drink, and good music, and took the opportunity to chime in on every experience I shared on the Internetz.

I’m going to miss Pat. I will think about him regularly, throughout my life. He will never diminish in my memories, because I know I will see him again soon—for the same reasons, when I first met him, I knew he was my family.

from http://ift.tt/Vam3WI

Wednesday, June 18, 2014

Disrupting The Panel Format At API Craft SF

Last week I participated in a panel at API Craft San Francisco with Uri Sarid (@usarid), Jakub Nesetril (@jakubnesetril), and Tony Tam (@fehguy), moderated by Emmanuel Paraskakis (@manp), at the 3Scale office.

The panel started with me as the last person in the row of panelists. Emmanuel asked his first question, passing the microphone to Uri, who was first in line; once Uri was done he handed the mic to Jakub, then to Tony, and lastly to me.

As Emmanuel asked his second question, I saw the same thing happening. He handed the microphone to Uri, then Jakub, then Tony. Even though the questions were good, the tractor beam of the panel was taking hold, making it more of an assembly line than a conversation.

I needed to break the energy, and as soon as I got the microphone in my hand I jumped up and made my way through the crowd, around the back, to where the beer was, and helped myself to a fresh Stone Arrogant Bastard (ohh the irony). I could have cut through the middle, but I wanted to circle the entire audience as I slowly gave my response to the question.

With beer in hand I slowly walked back up, making reference to various people in the audience, hoping by the time I once again joined the panel, the panel vibe had been broken, and the audience would be part of the conversation. It worked, and the audience began asking more questions, to which I would jump up and bring the mic to them--making them part of the panel discussion.

I don’t think the panel format is broken, I just think it lends itself to some really bad implementations. You can have a good moderator, and even good panelists, but if you don’t break the assembly line of the panel, and make it a conversation amongst not just the panelists, but also the audience, the panel format will almost always fail.

from http://ift.tt/1lFFqMs

Monday, June 9, 2014

Exhaust From Crunching Open Data And Trying To Apply Page Rank To Spreadsheets

I stumbled across a very interesting post on PageRank for spreadsheets. The post is a summary of a talk, but provides an interesting look at trying to understand open data at scale--something I've tried doing several times, including my Adopt A Federal Government Dataset work, which reminds me of how horribly out of date it all is.

There is a shitload of data stored in Microsoft Excel, Google Spreadsheet and CSV files, and trying to understand where this data is, and what is contained in these little data stores is really hard. This post doesn’t provide the answers, but gives a very interesting look into what goes into trying to understand open data at scale.

The author acknowledges something I find fascinating, that “search for spreadsheet is hard”—damn straight. He plays with different ways of quantifying the data based upon number of columns, rows, content, data size, and even file formats.
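
To make that quantification idea concrete, here is a minimal sketch of profiling a single CSV file by its dimensions and size. This is purely illustrative; the `profile_csv` function and the particular features it extracts are my own assumptions about what such a pipeline might compute, not anything from the original talk.

```python
import csv
import io

def profile_csv(raw_text, filename="unknown.csv"):
    """Extract simple quantifiable features from a CSV-style spreadsheet."""
    rows = list(csv.reader(io.StringIO(raw_text)))
    return {
        "file": filename,
        "rows": len(rows),
        "columns": max((len(r) for r in rows), default=0),
        "cells": sum(len(r) for r in rows),
        "bytes": len(raw_text.encode("utf-8")),
    }

sample = "name,city\nalice,portland\nbob,eugene\n"
print(profile_csv(sample, "sample.csv"))
```

Run over a large corpus of harvested files, features like these could feed a ranking signal, which is roughly the problem the post wrestles with.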

This type of storytelling from the trenches is very important. Every time I work to download, crunch and make sense of, or quantify open data, I try to tell the story in real-time. This way much of the mental exhaust from the process is public, potentially saving someone else some time, or helping them see it through a different lens.

Imagine if someone made the Google, but just for public spreadsheets. Wish I had a clone!

from http://ift.tt/1uMOczI

Ken Burns: History of Computing

I’m enjoying Mike Amundsen’s keynote from API Strategy & Practice in Amsterdam again, Self-Replication, Strandbeest, and the Game of Life What von Neumann, Jansen, and Conway can teach us about scaling the API economy.

As I listen to Mike’s talk, and other talks like Bret Victor’s “The Future of Programming”, I’m reminded of how important knowing our own history is. For some strange reason, in Silicon Valley we seem to excel at doing the opposite, making a habit of forgetting our own history of computing.

The conversation around remembering the history of compute came up between Mike Amundsen and me, during a beer-fueled discussion in the Taproom at Gluecon in Colorado last May. As we were discussing the importance of the history of technology, the storytelling approach of Ken Burns came up, and Mike and I were both convinced that Ken Burns needs to do a documentary series on the history of computing.

There is something about the way that Ken Burns does a documentary that can really reach our hearts and minds, and Silicon Valley needs a neatly packaged walkthrough of our computing history from say 1840 through 1970. I think we’ve heard enough stories about the PC era, Bill Gates and Steve Jobs, and what we need is a brush-up on the hundreds of other personalities that gave us computing, and ultimately the Internet.

My mother gave me a unique perspective: that I can manifest anything. So I will make this Ken Burns: History of Computing series happen, but I need your help. I need you to submit the most important personalities and stories you know from the history of computing that should be included in this documentary. To submit, just open an issue on the Github repository for this site, or if you are feeling adventurous, submit a Jekyll blog post for this site, and I'll accept your commit.

Keep any submission focused, and about just a single person, technology, or idea. Once we get enough submissions, we can start connecting the dots, weaving together any further narratives. My goal is to generate enough research for Mr. Burns to use when he takes over the creative process, and hopefully to generate enough buzz to get him to even notice that we exist. ;-)

It is my belief that we are at a critical juncture where our physical worlds are colliding with this new virtual world, driven by technology. To better understand what is happening, I think we need to pause, and take a walk through our recent history of compute technology, and learn more about how we got here--I couldn’t think of a better guide than Ken Burns.

Thanks for entertaining my crazy delusions, and helping me assemble the cast of characters that Ken Burns can use when crafting The History of Compute. Hopefully we can learn a lot along the way, as well as use the final story to help bring everyone up to speed on this crazy virtual world we’ve created for ourselves.

Photo Credit: Hagley Museum and Library and UNISYS

from http://ift.tt/1pXa6QR

Friday, June 6, 2014

The Black, White And Gray of Web Scraping

There are many reasons for wanting to scrape data or content from a public website. I think these reasons can be easily represented as different shades of gray: the darker the gray, the less legal you could consider it, and the lighter the gray, the more legal. You with me?

An example of darker gray would be scraping classified ad listings from Craigslist for use on your own site, while an example of lighter gray could be pulling a listing of veterans hospitals from the Department of Veterans Affairs website for use in a mobile app that supports veterans. One is corporate-owned data, and the other is public data. The motives for wanting either set of data would potentially be radically different, and the restrictions on each set of data would be different as well.

Many opponents of scraping don't see the shades of gray; they just see people taking data and content that isn't theirs. Proponents of scraping will have an array of opinions, ranging from those who believe that if it is on the web, it should be available to everyone, to people who would only scrape openly licensed or public data, and stay away from anything proprietary.

Scraping of data is never a black and white issue. I’m not blindly supporting scraping in every situation, but I am a proponent of sensible approaches to harvesting valuable information, the development of open source tools, as well as services that assist users in scraping.

from http://ift.tt/1i95vDj

Github Commit Storytelling: Now or Later

When you are making Github commits you have to provide a story that explains the changes you are committing to a repository. Many of us just post “blah blah”, “what I said last time”, or any other garbage that just gets us through the moment. You know you’ve all done it at some point.

This is a test of your ability to tell a story for the future, to be heard by your future self, or someone else entirely. In the moment it may seem redundant and worthless, but when you think of the future, and how this will look when it is being read by a future employer, or someone trying to interpret your work, things will be much different. #nopressure

In the world of Github, especially when your repositories are public, each commit is a test of your storytelling ability and how well you can explain this moment for future generations. How will you do on the test? I would say that I'm a C grade, and this post is just a reminder for me.

from http://ift.tt/1i95wY0

Thursday, June 5, 2014

Beta Testing Linkrot.js On API Evangelist

I started beta testing a new JavaScript library, combined with an API, that I’m calling linkrot.js. My goal is to address link rot across my blogs. There are two main reasons links go bad on my site: either I moved the page or resource, or a website or other resource has gone away.

To help address this problem, I wrote a simple JavaScript file that lives in the footer of my blog, and when the page loads, it spiders all the links on the page, combining them into a single list and then makes a call to the linkrot.js API.

All new links will get a URL shortener applied, as well as a screenshot taken of the page. Every night a script will run to check the HTTP status of each link used in my site—verifying the page exists, and is a valid link.

Every time linkrot.js loads, it will spider the links available in the page and sync with the linkrot.js API, which returns the corresponding shortened URL. If a link shows a 404 status, the link will no longer point to the page; instead it will pop up the last screenshot of the page, indicating the page no longer exists.
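
The nightly status check and the 404 handling described above could be sketched as follows. This is not the actual linkrot.js code, which is private; the function names and the status-to-action mapping are my own assumptions about how such a checker might behave.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_link(url, timeout=10):
    """Return the HTTP status for a link, or 0 if it is unreachable."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code
    except URLError:
        return 0

def classify(status):
    """Map a status code to the action the front-end would take."""
    if 200 <= status < 400:
        return "ok"           # serve the shortened URL as usual
    if status == 404:
        return "screenshot"   # swap the dead link for its last screenshot
    return "review"           # flag for manual review in a dashboard

print(classify(404))
```

A nightly cron job would run `check_link` over every stored URL and persist the result, so the page-load sync only has to look up the latest classification.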

Eventually I will develop a dashboard, allowing me to manage link rot across my websites, make suggestions on links I can fix, provide a visual screen capture of those I cannot, while also adding a new analytics layer by implementing shortened URLs.

Linkrot.js is just an internal tool I’m developing in private beta. Once I get up and running, Audrey will beta test, and we’ll see where it goes from there. Who knows!

from http://ift.tt/1mgW2ek

Google Accounts As Blueprint For All Software as a Service Applications

While there are many things I don’t agree with Google about, they are pioneers on the Internet, and in some cases have the experience to lead in some very important ways. In this scenario I’m thinking about Google Account management, and how it can be used as a blueprint for all other Software as a Service (SaaS) applications.

During a recent visit to my Google account manager, I was struck by the importance of all the tools that were made available to me.

Account Settings
Google gives you the basic level control to edit your profile, adding, updating the information you feel is relevant.

Security
Google gives you password-level control, but then steps up security with 2-step verification and application-specific passwords.

Manage Apps
Google provides a clean application manager, allowing you to control who has access to your account via the API. You can revoke any app, as well as see how they are accessing your data--taking advantage of OAuth 2.0, which is a standard across all Google systems.
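
Revocation is also available programmatically, not just through the account dashboard. As a sketch, assuming the OAuth 2.0 token revocation endpoint Google documents for its platform, the request to cut off an application's access is just a single call with the token to revoke; the helper name here is my own:

```python
from urllib.parse import urlencode

# Endpoint per Google's OAuth 2.0 documentation; treat as an assumption,
# since endpoints change over time.
REVOKE_ENDPOINT = "https://accounts.google.com/o/oauth2/revoke"

def build_revoke_url(token):
    """Build the request URL that revokes an access or refresh token."""
    return REVOKE_ENDPOINT + "?" + urlencode({"token": token})

print(build_revoke_url("ya29.example"))
```

Issuing a GET (or POST) to that URL invalidates the token, which is the machine-readable equivalent of clicking "revoke" in the application manager.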

Platform Apps
The management of applications is not exclusive to 3rd party applications. Google gives you insight into how their own applications are accessing your account as well. This view of the platform is critical to providing a comprehensive lens into how your data is used, and in establishing trust.

Data Tools
Google rocks it when it comes to data portability, with a data dashboard that allows you to view your data, as well as the option to download your data at any point via the Google Takeout system, which gives you direct access to all of your data across Google systems.

API Access
Google has long been a leader in the API space, providing over 100 (at last count) APIs. Most any application you use on the Google platform will have an API to allow for deeper integration into other applications and platforms.

Logging It All
A complete activity log is provided, in addition to being able to see how specific applications are accessing data. This easy access to logs is essential for users to understand how their data is being accessed and put to use.

There are other goodies in Google Account management, but these seven areas provide a blueprint that I think ANY software as a service should provide as a default for ALL users. I’m not even kidding. This should be the way ALL online services work, and users should be educated about why this is important.

I’m going to continue to work on this blueprint, as a side project, and start harassing service providers. ;-)

from http://ift.tt/1p0j939

That Has Already Failed Dumbass

I am always amused when someone jumps on an idea of mine, or one of my projects, proceeds to tell me how stupid I am, then points to some similar technology or approach that had previously failed. First, I always take it as a lesson, make sure I fully understand what they are referencing, and spend some time understanding what happened.

There are plenty of previous technology implementations out there, successful and failed, that we should learn from, and I would never miss an opportunity to better understand history, regardless of how I'm introduced to the concept.

Even with the lessons that come with these outreach efforts, I can't help but think how absurd this type of trollish behavior is: because something has failed, I should not be trying it again? Most often these efforts are shallow, with people not even reading my full post or understanding what I'm trying to do; they quickly associate it with their world and tell me how stupid I am to try something that has already failed.

The lack of critical analysis is clear, because if you really think about how things operate in real life, at least for me, when I fail, I work to learn as quickly as I can, and keep trying until I succeed. I would never see a failure as a stopping point; I see failures as opportunities to learn from, regroup, and try again.

I think many of these folks who feel the need to reach out and remind you of previous failures probably do not learn from failures in their own lives; they just recoil from each failure, pull back, and never do those things again. This stance makes for a rich environment for shooting down other people’s ideas, and because of insecurities, they always feel compelled to try and tear you down.

Maybe next time you want to tell someone their idea is bad because it has failed before, you might want to say, “have you taken a look at the previous failures of X, to see what you can learn?” rather than just shooting someone down.

from http://ift.tt/UftmfE

I Cannot Sit Idly By As Technology Marches Forward

I’ve given up some pretty cushy jobs in my career. One defining aspect of the failure of my previous marriage was my inability to just accept my role, sitting idly by enjoying the benefits of being a good employee in a small town. From my small-town existence in Eugene, Oregon I saw the Internet unfold, and beginning in 2005 I saw the potential of this new breed of web services that were built on HTTP.

I saw what the Internet was going to do to our society and culture, and had a sense that it wasn't going to be all good. With this in mind I had this nagging feeling that I had to work hard to understand these transformative technologies and approaches, and study how leading companies were putting them to use. I knew I couldn't rest until I understood them, operated at a scale that would matter, and had a voice that could influence how we use these new technologies.

Almost ten years later, while I still feel I cannot rest, I feel like I’m finally reaching the scope, in the size of conversations, my own reputation, and global reach, that I need to make an impact. At times I see the machine clearly, and find myself getting too close. I saw this as I worked in the enterprise, immersed myself in the startup culture of Silicon Valley, and moved to DC to work for the White House.

Each of these times when I got too close to the machines I was trying to understand, I could feel the heat, and hear the gears grinding all around me. All of these experiences have given me an understanding of how the machine works, but also how close I can get to it before I risk being consumed by the very beast I wish to understand and help influence.

I may not have the stamina and fortitude to continue for decades, but I cannot just sit idly by as technology blindly marches forward. I have to understand how it all works and push back, hoping to influence the course we take. I’m not naive enough to think I can single-handedly change everything, but with persistence I can grind against the machine and slow its march into negative areas, and possibly force it to move in more positive ways that can actually benefit society and our children’s future.

from http://ift.tt/Uftls2

Friday, May 23, 2014

One Characteristic Of Many Of The Enterprise API Folks I Meet

When I run into enterprise folks at events, one of the common characteristics I notice is that they always tell me how much they read my blog. Yay! Many of these people have Twitter accounts that follow me, and I follow them, and they can usually reference specific topics or posts I've written—demonstrating they do indeed read.

Most of these people I'm aware of online, and I usually consider them fence sitters. They rarely retweet posts or engage in conversations online; they just consume. I think this is fine, because not everyone is suited for actively engaging in the social media world. What I do think is interesting is how interested they are in my work: they let me know how my work reaches them, and reference specific topics and stories, but don’t actually contribute to the conversation.

In my opinion this isn't the individuals' fault; this is enterprise culture. Businesses of this scale are not equipped to deliver value unless it is sanctioned and specifically part of the larger brand. The enterprise generates value, but only in service of their business objectives. Generating open value for a community, even something as small as a retweet, comment, or a response to a blog post, is not in the DNA.

I strongly believe that businesses should generate just as much value as they consume. I’m not stupid. I understand that capitalism is about extracting value and monetizing for shareholders, but I can’t help but think about what this existence is like for these individuals.

Personally, I find it very rewarding to contribute to communities, and the overall health of the API space, by sharing ideas and engaging in conversations, without the expectation that it will all result in revenue for API Evangelist. Ultimately all of this effort comes back to me, and ensures I will be able to sustain my evangelism efforts, while also nourishing my own individual needs.

from http://ift.tt/1kxK7vA

Wednesday, May 7, 2014

Partnering For Me Is About Sharing Of Ideas, Research and Stories

I just turned down a potential partnership with a major enterprise company. As I do with many stories, I will scrub the names of those involved, because there is no reason to blame a single company, this is a lesson any large entity can learn from—for this story, we’ll just call them Acme Inc.

Acme Inc. contacted me a couple weeks ago, stating they were looking to do some research into the API space, and have been following my work for some time. After a few emails we made time to get on the phone and talk about what each of us were up to.

We spent about an hour, where I gave my history, why I do API Evangelist, and how I go about generating short form, and long form content as part of my research across the API space. Acme Inc. shared their interest in exploring how APIs could be applied to a couple of business sectors, and were looking to generate white paper(s) that they could distribute internally, and amongst partners.

Acme was extremely vague on details, and I understand that not everyone can be as open as me, especially when you work at a big company. We ended the call, agreeing that we’d explore what a potential partnership might look like, around research projects, but they would need me to sign an NDA first—no problem.

I received an NDA via email a couple days later, as I was on my way to IBM Impact in Las Vegas, and while I was busy in Las Vegas I received a reminder to please sign the agreement. When I was done in Las Vegas, on my way to Texas for another engagement, I signed the NDA, printed a PDF, and sent it back. A few days later, as I was leaving Texas, on my way home to LA for about 8 hours before I left for Berlin, I received another email requesting that I sign a second NDA, this time using a digital signature solution.

At this point my inbox was already out of control, and I boarded the 12 hour flight to Berlin, and proceeded to gear up for APIDays Berlin. After a day in Berlin where I finished my talk (which I rewrote onsite to better suit the local audience), and two days of amazing interactions at API Days Berlin, I finally made it back into my inbox.

This time I found another reminder to sign the digital copy of the NDA. At this point, I’ve received 2 separate NDAs, and two reminders to please sign the NDA--with no knowledge of what the ideas are. Maybe these are ideas or research I’m already working on? Who the fuck knows!

I just sent an email to my Acme representative, stating that I will regretfully decline the partnership, and summarized my feelings. Acme thanked me.

With this response, I probably blew off thousands of dollars in revenue, and who knows what else. In the end I don’t give a fuck. What I do is not about revenue or showcasing partnerships with global companies. It is about ideas, research and my storytelling, whether it is short form (blog), longer form (white papers), or in person, and always in a way that provides value to the community.

I understand that a large entity cannot operate like I do--I’m not naive. However when it comes to the world of APIs, if you want to play, you have to learn to open up, and at least be able to have conversations without the need for an NDA. People in the enterprise might fuck you over at every turn, but I do not. You can ask me to keep something private, and I will—end of story.

This is the fundamental difference between APIs and SOA that so many enterprise practitioners do not understand. As an individual I am so much happier being able to openly share my ideas, stories, code, and other resources publicly, in a way that is openly licensed, and most importantly—accessible by everyone. I strongly feel that there is a lot for the enterprise to learn from the world of open APIs, which isn’t just about technology. However, I’m gambling that most of them will never give a flying fuck.

from http://ift.tt/1kNi6N0

Thursday, May 1, 2014

APIs, edX, Tableau, Google At UT Arlington

After I went to Emory University in Atlanta, and spoke at IBM Impact in Las Vegas this week, I attended one day of a planning session for an edX course at UT Arlington, in Texas.

George Siemens (@gsiemens) invited me out for the planning session, which included folks from edX, Tableau, Google, and of course UT Arlington. I gave a one hour talk on APIs, a talk I had originally prepared to be about APIs in higher education, but after listening to the discussion for a couple hours, I crafted an entirely new talk.

The planning session was called "DesignJam for edX MOOC on Data, Analytics, and Learning”--which was all about developing an online course around data and analytics, where pulling information via APIs would play a central role.

Originally I was focused on evolving the edX platform using APIs, something I will be exploring more after talking with the folks from edX. However, in the end, I focused on how we could teach hundreds, or potentially thousands of students, about using APIs as part of a wider data and analytics class.

The session at UT Arlington ended up being a great experience, thinking about the potential of APIs for edX, what a class about data and analytics using APIs would look like, and a full day of hanging with some really interesting academics, while getting an opportunity to observe the process around planning in this fast growing world of online courses.

Just like with the domain exploration at Emory University and my experience at IBM Impact in Vegas, I have a lot of notes to process, and I’m sure I’ll have more to say about what happened in Texas.

from http://ift.tt/R8UQRV