Saturday, October 19, 2013

Transparency Is Not Just About Github, Crowdsourcing, Open Source And Open APIs

I wrote a piece on the rollout of the new healthcare marketplace site, and while there are numerous illnesses in the government that contributed to the launch being such a failure, my analysis took it up to the highest level possible, where the biggest problem can be attributed to a lack of transparency.

The post got a lot of comments via Twitter, LinkedIn, Facebook and other conversation threads I participated in, from people who disagreed with me and kept interpreting my use of transparency as referring to using Github, crowdsourcing, open source software or APIs, stating that these elements would not have saved the project, and that we just needed to fix government contracting and get the right people on the job.

These responses fascinate me and reflect what I see from technologists across the space. Developers often bring their baggage with them and don't engage with people or read articles entirely; they bring their understanding of a certain word, latch on and plow forward without critical analysis or deeper background research. I'm not exempt from this, and I work hard to reverse this characteristic in my own personality.

What I mean by transparency is letting the sunlight into your overall operations, by default. In this case, one of the numerous contractors applied this to front-end development, but the entire rest of the supply chain did not. The front-end group used Github, open source software and APIs, and did crowdsource their work at several critical points of the development cycle. However, even these represent just the visible building blocks, not the resulting effects of "being transparent".

First and foremost, this approach to projects makes you, the developer, project or product manager, think differently about how you structure things. You know that your work will see the light of day and potentially be scrutinized by others, and that immediately changes how you work. There is no hiding in the shadows, where mistakes, cut corners and shortcomings can be concealed from the public.

Even if you don't use Github, never listen to comments or issues raised by the public, keep all software proprietary and talk directly with code libraries and your database, simply showcasing the project work out in the open will show you the benefits of transparency. It just so happens that Github, established feedback loops, open source software and APIs help amplify transparency, and let in the healing benefits of sunlight.

There are numerous reasons I hear for NOT doing this. The stated reasons are usually the additional resources needed to work this way, or a lack of expertise in open source projects, but these tend to mask incompetence, insecurity, corruption or deep-rooted beliefs that protecting your intellectual property will result in more money to be made.

Transparency isn't about a specific tool, platform or process. It is about opening up, letting other people in, or possibly being almost entirely public in everything you do. Now I agree that not everyone is ready for this approach, and it may not be suited for every business sector, but I think you'd be surprised how easy it actually is, and how it can help you learn, grow and reduce the spread of illnesses within your project life cycle that may eventually cause you to fail.


Added Video of My API Talk at #OpenVA at University of Mary Washington

I gave a talk as part of the Mind the Future discussion at the #OpenVA gathering on the University of Mary Washington campus last Monday.

My talk focused on helping educational institutions understand the role APIs will play in the future of education, and on helping ensure web literacy across our society.

You can find the slides from my talk, along with my other talks, on Github. I've added the talk to the video section of my site, but you can also view it below.


Tuesday, October 15, 2013

Securing a Site That Runs on Github Pages With a JSON Backend in a Private Repository

I have been deploying websites that run 100% on Github, using Github Pages and Jekyll for a while now. I'm pushing forward with different approaches to deploying sites and applications using this model, and my recent evolution is securing a website, only allowing specific people to access and interact with the site or application.

In this case, I have a web application that I am developing, and will run on Github, but I'm not ready for it to be public. So I created a private repository, then using the Automatic Page Generator under Github settings, I created a public site for the repository using Github Pages.

Next I created a JSON file that contained the navigation for the site, and each page and its content:
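The file itself isn't shown here, but a minimal sketch of the kind of structure I'm describing, with hypothetical slugs and page content, looks something like this:

```json
{
  "nav": [
    { "slug": "home", "title": "Home" },
    { "slug": "about", "title": "About" }
  ],
  "pages": [
    { "slug": "home", "title": "Home", "content": "<p>Welcome to the site.</p>" },
    { "slug": "about", "title": "About", "content": "<p>About this project.</p>" }
  ]
}
```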

I put this JSON file in the master branch of the repository, which is private. After that, using Github.js, I wrote a little bit of JavaScript using jQuery that pulled the JSON from the master branch and built the navigation and page content on the page:
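The script isn't included in the post, but a sketch of how this might look with Github.js and jQuery follows; the repository, file and element names are my own assumptions, not from the original:

```javascript
// Sketch only: assumes Github.js (the michael/github library) and jQuery
// are already loaded on the page, and that the private repo's master
// branch holds a site.json file like the one described above.

// Pure helper: turn the JSON navigation into a string of list items.
function buildNav(site) {
  return site.nav.map(function (item) {
    return '<li><a href="#' + item.slug + '">' + item.title + '</a></li>';
  }).join('');
}

// Read site.json from the private master branch with an OAuth token,
// then inject the generated navigation into the page.
function loadSite(token) {
  var github = new Github({ token: token, auth: 'oauth' });
  var repo = github.getRepo('username', 'private-repo'); // hypothetical names
  repo.read('master', 'site.json', function (err, raw) {
    if (err) { return; } // missing or invalid token: leave the splash page alone
    var site = JSON.parse(raw);
    $('#navigation').html(buildNav(site));
  });
}
```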

Before the page will build, you have to have a valid OAuth token for the repository. In this particular scenario I am just passing the OAuth token through the URL as a parameter, and if the variable isn't present or is invalid, the request for the JSON file just returns a 404 and none of the navigation or site content is updated. For other versions I will use Github OAuth to secure the application, and just add people as Github team members if I want them to have access to the application.
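Pulling the token out of the URL is a one-liner's worth of work; a minimal sketch (the parameter name is my own assumption, not from the original):

```javascript
// Extract an "access_token" parameter from a query string like
// "?access_token=abc123". Returns null when the parameter is absent.
function tokenFromQuery(search) {
  var match = /[?&]access_token=([^&]*)/.exec(search);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the page itself this would be called as:
// var token = tokenFromQuery(window.location.search);
```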

Once I'm done with this particular application and am ready to make it public, I will just make the Github repository public, replace the pull of the master JSON file with a regular jQuery getJSON call, and use the JSON to build the site just like I do now.

This approach is definitely not for all applications, but easily allows me to run applications on Github and maintain a private, secure back-end. I just use Github OAuth security to access any files I want to keep private, and make only what I need public. In this case, unless you have access you just see a splash page.


Tuesday, October 8, 2013

Thoughts On Being An Employee

I am entering my first day as a furloughed government worker. I've been suiting up and going to work each day for almost two months. I spend each day going from meeting to meeting, working to carve out 15 minutes here and 15 minutes there to get actual work done.

Today is the first day I didn't suit up and go anywhere. I rolled out of bed, made coffee and got to work reading my feeds, sorting through emails and working through my Evernote notes and tasks. Then I got to work tackling some of the low hanging fruit on my to do list.

While it will probably take me a few days to get back into my old rhythm of productivity, I'm already finding some mojo to get things done. I'm struggling to shed some of the employee framework that I've been subjected to, even for just two short months. I can see how people have difficulty going from having a job to being freelance. Luckily I have the skills, discipline and mindset to pull from, so it shouldn't take me long to get back to normal.

This small glimpse gives me some insight into the damage our current employee framework does to people's creativity and productivity. The rituals of the commute, lunch breaks, meetings, coffee from Starbucks and other items not only take up our days, they drain our energy, leaving us much more exhausted each evening.

I don't think freelancing and/or working from home is for everyone. The employee role is not going anywhere, but I really think that as businesses, we have to consider how we structure "work" for our workers, and as individuals we have to really consider how we find balance, happiness and productivity in our careers.

Each day I spend back in the world of "open work", the chances of me going back to being an employee get slimmer and slimmer.


Sunday, October 6, 2013

Lack of Transparency Is Biggest Bottleneck

If you pay attention to the news, you have probably heard about the technical trouble with the launch of the Affordable Care Act, the 50 state marketplaces and the central federal site.

People across the country are encountering show-stopping bugs in the sign-up process, and if you go to the site currently, you get a splash page that states, "We have a lot of visitors on the site right now." If you stay on the page it will refresh every few seconds until, eventually, you might get a working registration form.

I worked at it for hours last night and was finally able to get into the registration process, only to hit errors several steps in, but I eventually got through the flow and successfully registered for an account, scrutinizing the code and network activity behind the scenes as I went along.

There are numerous blog posts trying to break down what is going wrong with the registration process, but ultimately many of them are very superficial, making vague accusations about the vendors involved and the perceived technology at play. I think one of the better ones was A Programmer's Perspective On And ACA Marketplaces, by Paul Smith.

Late last night, the Presidential Innovation Fellows (PIF), led by round one PIF Philip Ashlock (@philipashlock), set out to develop our own opinion about what is happening behind the scenes, working our way through the registration process and trying to identify potential bottlenecks.

When you look at the flow of calls behind each registration page, you see a myriad of calls to JavaScript libraries and to internal and external services that support the flow. There definitely could have been more thought put into preparing this architecture for scaling, but a handful of calls really stand out:

The second URL pretty clearly refers to the Center for Medicare and Medicaid Services (CMS) Enterprise Identity Management (EIDM) platform, which provides new user registration, access management and identity lifecycle management, letting users of Healthcare Exchange Plan Management register and get CMS credentials, while registration.js appears to handle much of the registration process.

Philip identified the createLiteEIDMAccount call as the most telling part of the payload and response, and as most likely the least resilient portion of the architecture, standing out as a potentially severe bottleneck. The CMS EIDM platform is just one potential choke point, and it isn't a bleeding edge solution; it is pretty straightforward enterprise architecture that may not have had adequate resources allocated to handle the load. I'm guessing under-allocated server and application resources are playing a rampant role across operations.

Many of the articles I've read over the last couple of days make reference to the front-end's use of Jekyll and APIs, and refer to the dangers of open washing and technological solutionism, when this is most likely an under-allocated, classic enterprise piece of the puzzle that can't keep up. I do agree with portions of the open washing arguments, specifically around showcasing the project as "open" when in reality the front-end is the only open piece, with the backend being a classic, closed architecture and process.

Without transparency into the entire stack of the site and the marketplace rollouts, it is not an open project. I don't care that one part of it is; that just makes it open-washing. The teams in charge of the front-end were very transparent, getting feedback on the front-end implementation and publishing the code to Github for review. It isn't guaranteed, but if the entire backend stack had followed the same approach, publishing the technology, architectural approaches and load testing numbers throughout a BETA cycle for the project, things might have been different on launch day.

Transparency goes a long way toward improving not just the technology and architecture; it can also shed light on illnesses in the procurement, contracting and other business and political aspects of projects. Many technologists will default to thinking I'm talking about open source, open tools or open APIs, but in reality I'm talking about an open process.

In the end, this story is just opinion and speculation. Without any transparency into exactly what the backend architecture of the site and the marketplaces is, we have no idea what the problem actually is. I'm just soapboxing my opinion like the authors of every other story published about this problem over the last couple of days, making them no more factual than some of my other fictional pieces about this being an inside job or a cleverly disguised denial of service attack!


Saturday, October 5, 2013

End To A Very Tough Week in Washington DC

It was a really tough week in Washington DC. We came into the office Monday morning to learn that, in addition to facing a possible government shutdown, about 30% of the workforce in our department at the VA was moving on, due to a change in contract. While these folks had an idea the contract was being renegotiated, they only learned they would be leaving that morning.

These folks had been here two years and held quite a bit of knowledge, so their exit represented a serious knowledge drain for the organization. Sure, we will get new bodies, but that's all they'll be until they get up to speed: warm bodies. This type of contracting has to play a significant role in keeping government from being as efficient as it could be.

Then on Tuesday came the news of the government shutdown. While our part of the VA was declared to have funding through Friday, one by one other groups within the VA, and other agencies across government, went silent, with furloughed workers heading home, turning off the lights and agency servers as they left.

As I tried to keep working I was faced with numerous challenges: the people I needed to talk to were gone, and websites I depended on were reduced to splash pages, including a data site I used frequently. I depended on that site for data sets in my daily work, but more importantly for an upcoming hackathon for veterans in NYC.

With this fresh in my mind, I set out downloading and scraping data from existing VA sites, in hopes of preparing it and publishing it via Github, so that hackers would have some resources to build web and mobile applications for veterans at The Feast Hackathon in NYC. Not only will the hackathon have limited access to data, the VA leadership that was planning on going won't be able to attend, and any press support the VA was going to provide won't be going out. WTF!

Supposedly Monday is our last day; then we face furlough along with the hundreds of thousands of other government workers. For right now I will just drink a beer, and think deeply about why I'm in Washington DC. More to come...


Thursday, October 3, 2013

I Am Lucky I Am Furloughed

I'm lucky. I'm furloughed, but my kid's education doesn't depend on it. I don't live month to month. I'm not at the beginning of my career, and I'm not a single parent. I took a cut in salary and left my family to go to Washington to answer the President's call to try to make government "cool" and efficient, and I work 70 hours a week to do it. I was handed a plum of a project which possibly will save the government a billion dollars in 2014, but the government just cost us all that and more.

We are now like a runner forced to stop mid-stride for one minute and then allowed to run for one minute: we expend more effort and cover less distance than running at a smooth pace. The cost of the shutdown, even if it is reversed tonight, has already been at least a billion, by my own humble reckoning. That assumes, of course, that you believe, as I do, that we have become a great nation because of good government, not in spite of bad government.

As duty demands, I'm going into work now to shut down the servers that host the Beta site for my project. It is the first day I haven't worn a tie in 4 months. One must always look on the bright side.

After I shut down my servers, I have to leave and am forbidden to do any work for the government as long as the shutdown lasts.