Wednesday, June 18, 2014

Disrupting The Panel Format At API Craft SF

Last week I participated in a panel at API Craft San Francisco with Uri Sarid (@usarid), Jakub Nesetril (@jakubnesetril), and Tony Tam (@fehguy), moderated by Emmanuel Paraskakis (@manp), at the 3Scale office.

When the panel started, I was the last person in the row of panelists. Emmanuel asked his first question and passed the microphone to Uri, who was first in line; once Uri was done he handed the mic to Jakub, then to Tony, and lastly to me.

As Emmanuel asked his second question I saw the same thing happening. He handed the microphone to Uri, then to Jakub, and then to Tony. Even though the questions were good, the tractor beam of the panel was taking hold, making it more of an assembly line than a conversation.

I needed to break the energy, and as soon as I got the microphone in my hand I jumped up and made my way through the crowd, around the back, to where the beer was, and helped myself to a fresh Stone Arrogant Bastard (ohh the irony). I could have cut through the middle, but I wanted to circle the entire audience as I slowly gave my response to the question.

With beer in hand I slowly walked back up, making reference to various people in the audience, hoping that by the time I rejoined the panel, the panel vibe would be broken and the audience would be part of the conversation. It worked, and the audience began asking more questions, and each time someone did I would jump up and bring the mic to them, making them part of the panel discussion.

I don’t think the panel format is broken; I just think it lends itself to some really bad implementations. You can have a good moderator, and even good panelists, but if you don’t break the assembly line of the panel and make it a conversation amongst not just the panelists but also the audience, the panel format will almost always fail.



from http://ift.tt/1lFFqMs

Monday, June 9, 2014

Exhaust From Crunching Open Data And Trying To Apply Page Rank To Spreadsheets

I stumbled across a very interesting post on PageRank for spreadsheets. The post is a summary of a talk, but it provides an interesting look at trying to understand open data at scale, something I've tried doing several times, including my Adopt A Federal Government Dataset work, which reminds me of how horribly out of date it all is.

There is a shitload of data stored in Microsoft Excel, Google Spreadsheets, and CSV files, and trying to understand where this data is, and what is contained in these little data stores, is really hard. This post doesn’t provide the answers, but it gives a very interesting look into what goes into trying to understand open data at scale.

The author acknowledges something I find fascinating, that “search for spreadsheet is hard.” Damn straight. He plays with different ways of quantifying the data based upon the number of columns, rows, the content, data size, and even file formats.
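
To make that idea concrete, here is a minimal sketch of the kind of quantification he describes, scoring local CSV files by row count, column count, and file size. This is just my own illustration, not code from the original post, and the scoring formula and the getSpreadsheetStats name are made up for the example.

```javascript
// Hypothetical sketch: rank local CSV files by a naive "interestingness" score.
const fs = require('fs');

function getSpreadsheetStats(path) {
  const raw = fs.readFileSync(path, 'utf8');
  const rows = raw.split('\n').filter(line => line.trim().length > 0);
  const columns = rows.length ? rows[0].split(',').length : 0;
  const bytes = fs.statSync(path).size;

  // Made-up score: bigger, wider spreadsheets rank higher.
  const score = Math.log(rows.length + 1) + Math.log(columns + 1) + Math.log(bytes + 1);

  return { path, rows: rows.length, columns, bytes, score };
}

// Score every CSV in the current directory and sort by the score.
const ranked = fs.readdirSync('.')
  .filter(file => file.endsWith('.csv'))
  .map(getSpreadsheetStats)
  .sort((a, b) => b.score - a.score);

console.log(ranked);
```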

This type of storytelling from the trenches is very important. Every time I work to download, crunch and make sense of, or quantify open data, I try to tell the story in real-time. This way much of the mental exhaust from the process is public, potentially saving someone else some time, or helping them see it through a different lens.

Imagine if someone made the Google, but just for public spreadsheets. Wish I had a clone!



from http://ift.tt/1uMOczI

Ken Burns: History of Computing

I’m enjoying Mike Amundsen’s keynote from API Strategy & Practice in Amsterdam again, “Self-Replication, Strandbeest, and the Game of Life: What von Neumann, Jansen, and Conway can teach us about scaling the API economy.”

As I listen to Mike’s talk, and other talks like Bret Victor’s “The Future of Programming,” I’m reminded of how important knowing our own history is, and how, for some strange reason, in Silicon Valley we seem to excel at doing the opposite, making a habit of forgetting our own history of computing.

The conversation around remembering the history of computing came up between Mike Amundsen and me during a beer-fueled discussion in the Taproom at Gluecon in Colorado last May. As we were discussing the importance of the history of technology, the storytelling approach of Ken Burns came up, and Mike and I were both convinced that Ken Burns needs to do a documentary series on the history of computing.

There is something about the way Ken Burns does a documentary that can really reach our hearts and minds, and Silicon Valley needs a neatly packaged walkthrough of our computing history from, say, 1840 through 1970. I think we’ve heard enough stories about the PC era, Bill Gates, and Steve Jobs; what we need is a brush-up on the hundreds of other personalities that gave us computing, and ultimately the Internet.

My mother gave me a unique perspective: that I can manifest anything. So I will make this Ken Burns: History of Computing series happen, but I need your help. I need you to submit the most important personalities and stories you know from the history of computing that should be included in this documentary. To submit, just open an issue on the GitHub repository for this site, or if you are feeling adventurous, submit a Jekyll blog post for this site, and I'll accept your commit.

Keep any submission focused, and about just a single person, technology, or idea. Once we get enough submissions, we can start connecting the dots, weaving together any further narratives. My goal is to generate enough research for Mr. Burns to use when he takes over the creative process, and hopefully to generate enough buzz to get him to even notice that we exist. ;-)

It is my belief that we are at a critical juncture where our physical world is colliding with this new virtual world, driven by technology. To better understand what is happening, I think we need to pause and take a walk through our recent history of computing technology, and learn more about how we got here. I couldn’t think of a better guide than Ken Burns.

Thanks for entertaining my crazy delusions, and helping me assemble the cast of characters that Ken Burns can use when crafting The History of Computing. Hopefully we can learn a lot along the way, as well as use the final story to help bring everyone up to speed on this crazy virtual world we’ve created for ourselves.

Photo Credit: Hagley Museum and Library and UNISYS



from http://ift.tt/1pXa6QR

Friday, June 6, 2014

The Black, White And Gray of Web Scraping

There are many reasons for wanting to scrape data or content from a public website. I think these reasons can easily be represented as different shades of gray: the darker the gray, the less legal you could consider it, and the lighter the gray, the more legal. You with me?

An example of darker gray would be scraping classified ad listings from Craigslist for use on your own site, where an example of lighter gray could be pulling a listing of veterans hospitals from the Department of Veterans Affairs website for use in a mobile app that supports veterans. One is corporate-owned data, and the other is public data. The motives for wanting either set of data would potentially be radically different, and the restrictions on each set of data would be different as well.

Many opponents of scraping don't see the shades of gray; they just see people taking data and content that isn't theirs. Proponents of scraping hold an array of opinions, ranging from the belief that if it is on the web it should be available to everyone, to only scraping openly licensed or public data and staying away from anything proprietary.

Scraping of data is never a black-and-white issue. I’m not blindly supporting scraping in every situation, but I am a proponent of sensible approaches to harvesting valuable information, developing open source tools, and building services that assist users in scraping.



from http://ift.tt/1i95vDj

Github Commit Storytelling: Now or Later

When you are making GitHub commits you have to provide a story that explains the changes you are committing to a repository. Many of us just post “blah blah,” “what I said last time,” or any other garbage that just gets us through the moment. You know you’ve all done it at some point.

This is a test of your ability to tell a story for the future, to be heard by your future self, or by someone else entirely. In the moment it may seem redundant and worthless, but when you think about how it will look when it is being read by a future employer, or by someone trying to interpret your work, things will be much different. #nopressure

In the world of GitHub, especially when your repositories are public, each commit is a test of your storytelling ability and how well you can explain this moment for future generations. How will you do on the test? I would say that I'm a C grade, and this post is just a reminder for me.



from http://ift.tt/1i95wY0

Thursday, June 5, 2014

Beta Testing Linkrot.js On API Evangelist

I started beta testing a new JavaScript library, combined with an API, that I’m calling linkrot.js. My goal is to address link rot across my blogs. There are two main reasons links go bad on my site: either I moved the page or resource, or a website or other resource has gone away.

To help address this problem, I wrote a simple JavaScript file that lives in the footer of my blog. When the page loads, it spiders all the links on the page, combines them into a single list, and then makes a call to the linkrot.js API.

All new links will get a URL shortener applied, as well as a screenshot taken of the page. Every night a script will run to check the HTTP status of each link used on my site, verifying the page exists and is a valid link.
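
As a rough sketch of what that nightly check might look like, assuming the links are already stored somewhere and ignoring the shortener and screenshot pieces entirely (this is not the actual linkrot.js code, and the linkStatus helper is my own name for the example):

```javascript
// Hypothetical sketch of the nightly status check; not the actual linkrot.js code.
const http = require('http');
const https = require('https');

// In the real tool these would come from the linkrot.js datastore.
const links = [
  'http://apievangelist.com/',
  'http://example.com/some-page-that-may-be-gone'
];

function linkStatus(url) {
  return new Promise(resolve => {
    const client = url.startsWith('https') ? https : http;
    const request = client.get(url, response => {
      response.resume(); // discard the body, we only care about the status code
      resolve({ url, status: response.statusCode });
    });
    request.on('error', () => resolve({ url, status: 0 })); // unreachable host
  });
}

// Check every link and flag anything that does not come back as a 200.
Promise.all(links.map(linkStatus)).then(results => {
  results
    .filter(result => result.status !== 200)
    .forEach(result => console.log(`Possible link rot: ${result.url} (${result.status})`));
});
```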

Every time linkrot.js loads, it spiders the links available on the page and syncs with the linkrot.js API, which returns the corresponding shortened URL for each link. If a link shows a 404 status, it will no longer link to the page; instead it will pop up the last screenshot of the page, identifying that the page no longer exists.
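
Just to sketch the browser side of that flow, something like the following would do the job. The endpoint, field names, and response shape here are all assumptions for illustration, not the real linkrot.js API:

```javascript
// Hypothetical browser-side sketch; the endpoint and response shape are assumptions.
document.addEventListener('DOMContentLoaded', () => {
  const anchors = Array.from(document.querySelectorAll('a[href]'));
  const links = anchors.map(anchor => anchor.href);

  fetch('https://example.com/linkrot/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ links })
  })
    .then(response => response.json())
    .then(results => {
      anchors.forEach(anchor => {
        const result = results[anchor.href];
        if (!result) return;
        if (result.status === 404) {
          // Dead link: show the last screenshot instead of following the link.
          anchor.addEventListener('click', event => {
            event.preventDefault();
            window.open(result.screenshotUrl, '_blank');
          });
        } else if (result.shortUrl) {
          anchor.href = result.shortUrl; // swap in the shortened, trackable URL
        }
      });
    });
});
```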

Eventually I will develop a dashboard, allowing me to manage link rot across my websites, suggesting links I can fix, providing a visual screen capture of those I cannot, and adding a new analytics layer through the shortened URLs.

Linkrot.js is just an internal tool I’m developing in private beta. Once I get it up and running, Audrey will beta test it, and we’ll see where it goes from there. Who knows!



from http://ift.tt/1mgW2ek


Google Accounts As Blueprint For All Software as a Service Applications

While there are many things I don’t agree with Google about, they are pioneers on the Internet, and in some cases have the experience to lead in some very important ways. In this scenario I’m thinking about Google Account management, and how it can be used as a blueprint for all other Software as a Service (SaaS) applications.

During a recent visit to my Google account manager, I was struck by the importance of all the tools that were made available to me.

Account Settings
Google gives you a basic level of control to edit your profile, adding and updating the information you feel is relevant.

Security
Google gives you password-level control, but then steps up security with 2-step verification and application-specific passwords.

Manage Apps
Google provides a clean application manager, allowing you to control who has access to your account via the API. You can revoke any app, as well as see how it is accessing your data, taking advantage of OAuth 2.0, which is standard across all Google systems.

Platform Apps
The management of applications is not exclusive to third-party applications. Google gives you insight into how its own apps are accessing your account as well. This view of the platform is critical to providing a comprehensive lens into how your data is used, and in establishing trust.

Data Tools
Google rocks it when it comes to data portability, with a data dashboard that allows you to view your data, as well as the option to download your data at any point via the Google Takeout system, which gives you direct access to your data across Google systems.

API Access
Google has long been a leader in the API space, providing over 100 (at last count) APIs. Almost any application you use on the Google platform will have an API to allow for deeper integration into other applications and platforms.

Logging It All
A complete activity log is provided, in addition to being able to see how specific applications are accessing data. This easy access to logs is essential for users to understand how their data is being accessed and put to use.

There are other goodies in Google Account management, but these seven areas provide a blueprint that I think ANY software as a service should provide by default for ALL users. I’m not even kidding. This should be the way ALL online services work, and users should be educated about why this is important.

I’m going to continue to work on this blueprint, as a side project, and start harassing service providers. ;-)



from http://ift.tt/1p0j939

That Has Already Failed Dumbass

I am always amused when someone jumps on an idea of mine, or one of my projects, proceeds to tell me how stupid I am, and then points to some similar technology or approach that previously failed. First, I always take it as a lesson, make sure I fully understand what they are referencing, and spend some time understanding what happened.

There are plenty of previous technology implementations out there, successful and failed, that we should learn from, and I would never miss an opportunity to better understand history, regardless of how I'm introduced to the concept.

Even with the lessons that come with these outreach efforts, I can't help but think how absurd this type of trollish behavior is, along with the notion that because something has failed, I should not try it again. Most often these efforts are shallow, with people not even reading my full post or understanding what I'm trying to do; they quickly associate it with their own world and tell me how stupid I am to try something that has already failed.

The lack of critical analysis is clear, because if you really think about how things operate in real life, at least for me, when I fail I work to learn as quickly as I can, and keep trying until I succeed. I would never see a failure as a stopping point; I see failures as opportunities to learn, regroup, and try again.

I think that many of these folks who feel the need to reach out and remind you of previous failures probably do not learn from the failures in their own lives; they just recoil from each failure, pull back, and never do those things again. This stance makes for a rich environment for shooting down other people’s ideas, and because of their insecurities, they always feel compelled to try to tear you down.

Maybe next time you want to tell someone their idea is bad because it has failed before, you might want to say, “Have you taken a look at the previous failures of X, to see what you can learn?” rather than just shooting them down.



from http://ift.tt/UftmfE

I Cannot Sit Idly By As Technology Marches Forward

I’ve given up some pretty cushy jobs in my career. One defining aspect of the failure of my previous marriage was my inability to just accept my role and sit idly by, enjoying the benefits of being a good employee in a small town. From my small-time existence in Eugene, Oregon, I saw the Internet unfold, and beginning in 2005 I saw the potential of this new breed of web services that were built on HTTP.

I saw what the Internet was going to do to our society and culture, and had a sense that it wasn't going to be all good. With this in mind I had a nagging feeling that I had to work hard to understand these transformative technologies and approaches, and study how leading companies were putting them to use. I knew I couldn't rest until I understood them, operated at a scale that would matter, and had a voice that could influence how we use these new technologies.

Almost ten years later, while I still feel I cannot rest, I feel like I’m finally reaching the scope, in the size of the conversations, my own reputation, and the global reach, that I need to make an impact. At times I see the machine clearly, and find myself getting too close. I saw this as I worked in the enterprise, immersed myself in the startup culture of Silicon Valley, and moved to DC to work for the White House.

Each time I got too close to the machines I was trying to understand, I could feel the heat and hear the gears grinding all around me. All of these experiences have given me an understanding of how the machine works, but also of how close I can get to it before I risk being consumed by the very beast I wish to understand and help influence.

I may not have the stamina and fortitude to continue for decades, but I cannot just sit idly by as technology blindly marches forward. I have to understand how it all works and push back, hoping to influence the course we take. I’m not naive enough to think I can single-handedly change everything, but with persistence I can grind against the machine, slow its march into negative areas, and possibly force it to move in more positive ways that actually benefit society and our children’s future.



from http://ift.tt/Uftls2