Monday, February 23, 2015

Making Sense At The 100K Level: Twitter, Github, And Google Groups

I try to make sense of which companies are doing interesting things in the API space, and of the interesting technologies coming out of these companies, which sometimes take on a life of their own. The thing I constantly wrestle with is: how do you actually do this? The best tools in my toolbox currently are Twitter and Github. These two platforms provide me with a wealth of information about what is going on within a company or specific project, the surrounding community, and the relationships they have developed (or not) along the way.

Recently I've been spending time diving deeper into the Swagger community, and two key sources of information are the @swaggerapi Twitter account and the Swagger Github account, with its 25+ repositories. Using each of these platforms' APIs, I can pull followers, favorites, and general activity for the Swagger community. Then I come up against the SwaggerSocket Google Group. While there is a rich amount of information and activity in the forum, with no RSS feed or API I can't make sense of the conversation at a macro level, alongside the other signals I'm tracking on—grrrrrr.
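
To give a sense of what this looks like in practice, here is a minimal PHP sketch of the Github half of this monitoring, pulling repository-level activity for the Swagger organization. I'm assuming the swagger-api organization name and skipping authentication; unauthenticated Github calls are rate limited to 60 requests per hour, and the Twitter side of the equation would require a full OAuth dance on top of this.

```php
<?php
// A minimal sketch of pulling repository activity from the Github API.
// Assumption: the Swagger repositories live under the swagger-api
// organization. Github requires a User-Agent header on every request.
$url = "https://api.github.com/orgs/swagger-api/repos?per_page=100";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, "api-monitoring-sketch");
$repos = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($repos as $repo) {
    // Watchers, forks, and last push give a rough pulse for each project.
    echo $repo['full_name'] . ": " .
         $repo['watchers_count'] . " watchers, " .
         $repo['forks_count'] . " forks, last push " .
         $repo['pushed_at'] . "\n";
}
```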

At any time I can tune into the activity on Twitter and Github for the Swagger community, but the Google Group takes much more work, and I have to go to the website to view it and manually engage. Ideally I could see Twitter, Github, and Google Group activity side by side, and make sense of the bigger picture. I can get email updates from the forum, but those only apply going forward, and give me no historical context for the conversation within the group—without visiting the website.

Just a side rant from the day. This is not a critique of the Swagger community, just an outside view on the use of Google Groups as an API community management tool. I use the platform for APIs.json and API Commons, but I think I might work on a better way to manage the community, one that allows outside entities to better track on the conversation.




Sunday, February 8, 2015

Emails From People Saying Nice Things And Not Wanting Anything From Me

I process my thoughts through stories on my blogs, and oftentimes you'll find me bitching about people and companies here on kinlane.com. Other times you'll find me waxing poetic about how nice people can be—welcome to my bipolar blogging world.

In this post, I want to say how much I like finding nice emails from people in my inbox, especially when they don't want anything from me. Getting these notes about specific stories or topics, or just general thanks for what I do, makes it all worth it.

Ok, I'll stop gushing, but I just wanted to say thank you—you know who you are.




Friday, February 6, 2015

An Archive.org For Email Newsletters Using Context.io

I'm not going to beat around the bush on this idea; it just needs to get done, and I don't have the time. We need an archive.org for email newsletters, and the other POP-related elements of the digital world we have created for ourselves. Whether we love or hate the inbox layer of our lives, it plays a significant role in crafting our daily reality. Bottom line: we don't always keep the history that happens here, and we should be recording it all, so that we can pause and re-evaluate at any point in the future.

I cannot keep up with the number of newsletters flowing into my inbox, but I do need to be able to access this layer as I have the bandwidth available to process it. Using Context.io, I need someone to create an index of the popular email newsletters that are emerging. I feel like we are seeing a renaissance in email, in the form of the business newsletter, something I don't always have the time to participate in.
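
To make the idea a bit more concrete, here is a rough PHP sketch of the harvesting side, against the Context.io 2.0 messages endpoint. The account ID and newsletter address are hypothetical placeholders, and I'm glossing over the two-legged OAuth signing that Context.io requires on every request.

```php
<?php
// A rough sketch of harvesting newsletters from a connected mailbox via
// the Context.io 2.0 REST API. The account ID and sender address below
// are hypothetical placeholders, and the Authorization header is elided;
// every Context.io request has to be signed with two-legged OAuth using
// your consumer key and secret.
$accountId = "YOUR_ACCOUNT_ID";
$url = "https://api.context.io/2.0/accounts/" . $accountId . "/messages"
     . "?from=newsletter@example.com&include_body=1&limit=50";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "Authorization: OAuth ..." // signed OAuth header goes here
));
$messages = json_decode(curl_exec($ch), true);
curl_close($ch);

// Each message would then be written out to a public, searchable archive.
foreach ($messages as $message) {
    echo date("Y-m-d", $message['date']) . " " . $message['subject'] . "\n";
}
```

From there it is mostly grunt work: one mailbox subscribed to everything, and a scheduled job writing each issue into the public archive.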

During the course of my daily monitoring, I received an email from Congress.gov about a new legislative email alert service, something that seems like it would interest me, but immediately I'm questioning my ability to process the new information:

  • A specific bill in the current Congress - Receive an email when there are updates to a specific bill (new cosponsors, committee action, vote taken, etc.); emails are sent once a day if there has been a change in a particular bill’s status since the previous day.
  • A specific member’s legislative activity - Receive an email when a specific member introduces or cosponsors a bill; emails are sent once a day if a member has introduced or cosponsored a bill since the previous day.
  • Congressional Record - Receive an email as soon as a new issue of the Congressional Record is available on Congress.gov.

This is all information I'm willing to digest, but I ultimately have to weigh it alongside the rest of my information diet—a process that isn't always equitable. If I could acknowledge an email newsletter as something I'm interested in, but only when I have the time, I would be open to adopting a new service.

We need to record this layer of our history, something our inboxes just aren't doing well enough. I think we need a steward to step up and be the curator of this important content that is being sent to our inboxes, and doesn't always exist on the open Internet. Technically, I do not think it would be too difficult to do using Context.io; someone just needs to spend a lot of time signing up for newsletters, and be creative in crafting an interface and index that people can engage with in meaningful ways, and will actually find useful and pay for.




Tuesday, February 3, 2015

A Machine Readable Version of The President's Fiscal Year 2016 Budget On Github

The release of the president's fiscal year 2016 budget in a machine readable format on Github was one of the most important things to come out of Washington D.C. in a while when it comes to open data and APIs. I was optimistic when the president mandated that all federal agencies go machine readable by default, but the release of the annual budget in this way is an important sign that the White House is following its own open data rhetoric, and something every agency should emulate.

There is still a lot of work to be done to make sense of the federal budget, but having it published in a machine readable format on Github saves a lot of time and energy in this process. As soon as I landed on the Github repository, clicked into the data folder, and saw the three CSV files, I got to work converting them to JSON format. Having the budget available as CSV is a huge step beyond the historic PDFs we've had to process in the past to get at the budget numbers, but having it in JSON by default would be even better.
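
For anyone who wants to reproduce that step, a minimal version of the conversion looks something like this in PHP, assuming each CSV has a single header row and consistent columns throughout:

```php
<?php
// Convert the three budget CSV files from the White House Github repo
// into JSON. Assumes each file has one header row and consistent columns.
$files = array("budauth.csv", "outlays.csv", "receipts.csv");

foreach ($files as $file) {
    $handle = fopen($file, "r");
    $headers = fgetcsv($handle);

    $rows = array();
    while (($line = fgetcsv($handle)) !== false) {
        // Pair each value with its column header to build a JSON object.
        $rows[] = array_combine($headers, $line);
    }
    fclose($handle);

    file_put_contents(str_replace(".csv", ".json", $file), json_encode($rows));
}
```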

What now? Well, I would like to make more sense of the budget, and to be able to slice and dice it in different ways, I'm going to need an API. Using a Swagger definition, I generated a simple server framework with Slim and PHP, with an endpoint for each file: budauth, outlays, and receipts. Now I just need to add some searching, filtering, paging, and other essential functionality, and it will be ready for public consumption. Then I can get to work slicing and dicing this budget, and previous years' budgets, in different ways.
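
Stripped way down, that generated server looks something like the sketch below, assuming the JSON files from the conversion above sit next to the script. The ?q= parameter here is just a naive stand-in for the real searching, filtering, and paging that still needs to be built:

```php
<?php
// A stripped down sketch of a Slim (v2) API server with one endpoint per
// budget file. The ?q= keyword filter is a naive placeholder for real
// search, filtering, and paging.
require 'vendor/autoload.php';

$app = new \Slim\Slim();

foreach (array('budauth', 'outlays', 'receipts') as $name) {
    $app->get("/$name", function () use ($app, $name) {
        $rows = json_decode(file_get_contents("$name.json"), true);

        // Naive keyword match across every field of every row.
        if ($q = $app->request->get('q')) {
            $rows = array_values(array_filter($rows, function ($row) use ($q) {
                return stripos(implode(' ', $row), $q) !== false;
            }));
        }

        $app->response->headers->set('Content-Type', 'application/json');
        echo json_encode($rows);
    });
}

$app->run();
```

A request like /outlays?q=defense would then return just the matching outlay line items as JSON.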

I already have my eye on a couple of D3.js visualizations to help me make sense of the budget. First, I want to be able to show the scope of the budget for different areas of government, to help make the argument against bloat in areas like the military. Second, I want to provide some sort of interactive tool that will help me express what my priorities are when it comes to the federal budget, something I've done in the past.

It makes me very happy to see the federal government's budget expressed in a machine readable way on Github. Every city, county, state, and federal government agency should be publishing their budgets this way. PDF is no longer acceptable; in 2015, the minimum bar for a government budget is a CSV on Github—let's all get to work!


