Monday, January 25, 2016

Writing Blog Posts That Increase Page Views, But Minimize Shares

Oftentimes I feel like I'm swimming upstream in my daily work. My work is not ad-based. It is based upon individual human relationships that I have with actual people. Remember people? Those are the carbon life forms we all have to meet with in the real world, not the virtual representations we interact with online each day.

Oftentimes I feel like I am working with a balance of the two. I need the page views to justify the relationships that I establish online, but if I do not reinforce them offline--they mean nothing! Everything I do occurs as an online transaction, but the money does not ever actually transact unless the relationships are reinforced in an offline environment.

The most value I generate daily exists beyond the page view, after the view, and at the point of receipt... which is impossible to measure in a digital sense. The only thing I can do is stay true to the delivery, and not get caught up in measuring the view, which quickly disappears before the receipt is ever acknowledged. This is the digital economy. I just do not have the time to wait; I have to get back to producing, and delivering.

Sure, some other party can step in and care about the view, and struggle to understand the receipt, but this is what record labels, publishers, and other middlemen have done through the ages. I do not have time for this. I am a creator. I am the source. I generate the exhaust, and leave it to the rest of you to determine the value, and fight over the scraps.

I am maximizing the view, but minimizing the share. Most who read what I write do not want to share. They want to keep it for themselves. This is an entirely different market than the PPV and PPC world that is being monetized as we speak. How do we incite, rather than extract and monetize? Inciting is a much more difficult challenge than simply getting rich off the emotion that we invoke in others--for me, everything needs to actually result in action, rather than just a response.

As I look through the "numbers", I cannot help but think everyone is looking, but holding back when it comes to the actual sharing--something I will explore further in future posts.



from http://ift.tt/1NwGwYE

I Wish I Could Select From My Own Templates When Setting Up Github Pages Project Site

All of my public presence runs as hundreds of separate Github projects. Because content, code, and JSON data drive my world, Github + Github Pages is a great way to run my operations. This approach to running my business allows me to break up my research projects into small bite-size repositories, grouped into organizations, where I can collaborate with partners, and the public at large, using Github's numerous social features--I call this Hacker Storytelling.

When I set up a new project, I first set up the master branch, making it public or private, depending on my goals. Then I always set up a gh-pages branch, to act as the public face for each project. Part of the process is always clicking next, and next again, through the default page and template portion of the Github Pages setup flow. I always just choose the default settings, because once I've checked out the gh-pages branch, I immediately replace it with my own template, depending on the type of project it is.
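
For anyone curious what that template replacement step looks like in practice, here is a minimal sketch of how it could be scripted in Python. The repository URLs, the workdir, and the seed_gh_pages helper are all hypothetical, and it assumes git is installed and the gh-pages branch already exists.

    import shutil
    import subprocess
    from pathlib import Path

    def seed_gh_pages(project_repo, template_repo, workdir="/tmp/ghpages"):
        """Clone a project, switch to its gh-pages branch, and replace the
        default Github Pages scaffolding with one of my own templates."""
        project_dir = Path(workdir) / "project"
        template_dir = Path(workdir) / "template"

        subprocess.run(["git", "clone", project_repo, str(project_dir)], check=True)
        subprocess.run(["git", "checkout", "gh-pages"], cwd=project_dir, check=True)
        subprocess.run(["git", "clone", template_repo, str(template_dir)], check=True)

        # copy the template files over the default page Github generated
        for item in template_dir.iterdir():
            if item.name == ".git":
                continue
            target = project_dir / item.name
            if item.is_dir():
                shutil.copytree(item, target, dirs_exist_ok=True)
            else:
                shutil.copy2(item, target)

        subprocess.run(["git", "add", "-A"], cwd=project_dir, check=True)
        subprocess.run(["git", "commit", "-m", "Seed gh-pages from my template"], cwd=project_dir, check=True)
        subprocess.run(["git", "push", "origin", "gh-pages"], cwd=project_dir, check=True)

    # hypothetical repositories
    # seed_gh_pages("git@github.com:example/new-project.git",
    #               "git@github.com:example/api-portal-template.git")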

I have around five separate templates that I use, depending on whether it is an API portal, an open data project, or a handful of other variations I use to collaborate around API related projects. I wish Github would let me specify my own templates, allowing me to add one or many template repositories, which could be used to spawn my new gh-pages projects. This would save me a couple of extra steps with the setup of each project.

I'm not sure how many projects other Github users are setting up, or maybe I am just a special snowflake, but I can't help but think others would find this beneficial. As another, more advanced feature, it would be nice to have a reseller layer to this, where I could create account level template galleries, where my clients could then set up new organizations, and repositories, all driven by a master set of templates that I control.

Just brainstorming the possibilities here. It is what I do. If you happen to stumble across this post, Github, it would be a sweet new feature that I hope others would find valuable too!



from http://ift.tt/1ZNMyLG

Friday, January 22, 2016

Overview Of My Knight Funded Adopta.Agency Project

This is an overview of my Adopta.Agency open data project, which was funded by Knight Foundation in the summer of 2015.

Born Out Of President Obama's Mandate
The Adopta Federal Agency, now shortened to just Adopta.Agency, was born out of the presidential mandate by Barack Obama that all federal agencies should go machine readable by default, and instead of publishing just PDF versions of information, they should be outputting XML, CSV, JSON, and HTML formats. This mandate is still being realized across the US federal government, and has helped put into motion a great deal of open data work across all agencies, opening data that impacts almost every business sector today.
Worked On Open Data As Presidential Innovation Fellow
I personally worked on the open data effort in the federal government, as a Presidential Innovation Fellow, or simply PIF. I worked on open data inventory efforts at the Department of Veterans Affairs, and saw firsthand the hard work going on in government. A lot of very hardworking folks were focused on meeting the mandate, discovering open inventory assets like veteran hospital locations, and veteran program data. The challenge is not finding data at the VA; it is often the process of cleaning it up, vetting it, and making it available in simple, machine readable formats that is the true challenge.
It Is Not About Technology, But A Process Blueprint To Apply
While in government I saw that technology only got you so far, and that the biggest challenges are around just having the resources to make valuable data available as CSV, JSON, and if possible APIs. The government just doesn't have the resources, or the skills, to always make this a reality. Adopta.Agency is designed to not just be yet another technological solution; it is designed to be a process blueprint, to help passionate government workers, or the average citizen, take already existing, publicly available open data, and move it forward one or two steps. The goal of Adopta.Agency projects is to simply clean up the data and make it available on Github, in CSV and JSON formats, as well as publishing a full API when possible.
You Can "Adopta" Agency, Project, or Data
Adopta.Agency is focused on empowering anyone to target a government agency, and / or a specific project and data, and help encourage more to be done with the data, bringing awareness to the fact that the data exists, and what is possible when it is available as simple, machine readable resources in a public Github repository. Adopta.Agency isn't the next technological solution; it is a blueprint to help anyone conduct the hard work of moving forward the open data conversation for an agency, project, or individual piece of data.
Using Github Repositories, With Pages Making Everything Forkable
Adopta.Agency relies on the social coding platform Github for much of its functionality. The Adopta.Agency blueprint is available as a forkable Github repository, which allows anyone to take the master blueprint, fork it, and transform it into their own open data project, following the Adopta.Agency process, without knowing how to program. Github provides an environment that allows for evolving open data, using some of the same approaches used to push forward open source software development, but focused on making open data more accessible and usable via Github repositories.
Single YAML Checklist That Is Easy To Follow
Github uses YAML as a machine readable format for much of its content or data management, and Adopta.Agency leverages this, making the Adopta.Agency blueprint process accessible in a very human friendly data format. Everything within any Adopta.Agency project is editable in each project's central YAML file, allowing you to edit everything from the project details, to links to each of the APIs and open data files. YAML makes each blueprint machine readable, but in an easy to follow, single process driven checklist that anyone can follow without needing to understand how to program or read JSON.
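
To make that concrete, here is a rough illustration of loading a project's central YAML file with Python. The field names and values below are assumptions for the sake of the example, not the actual Adopta.Agency schema.

    import yaml  # PyYAML

    # hypothetical shape of a project's central YAML checklist
    example = """
    title: Adopta the VA Budget
    description: Cleaning up Department of Veterans Affairs spending data
    url: http://example.github.io/adopta-va-budget/
    tags:
      - veterans
      - budget
    data:
      - name: VA Spending By State
        csv: data/va-spending-by-state.csv
        json: data/va-spending-by-state.json
    apis:
      - name: VA Spending API
        url: http://example.com/api/va-spending
    """

    project = yaml.safe_load(example)
    print(project["title"])
    for item in project["data"]:
        print(item["name"], item["csv"], item["json"])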
Defining A Clear Objective For Adopta Projects
Each Adopta.Agency project has its own objective, targeting a single agency, project, or specific set of data. Using the central YAML file, each project owner can edit the title, description, URL, tags, and other details that articulate the objective of the project, making the goals as simple and clear as possible. This process of having to craft a concise statement describing the project is something many existing open data efforts lack--government workers just don't have the time or awareness to craft one.
Focus On Cleaning Up Data And Making Available As CSV And JSON
The primary function of any Adopta.Agency project is to target some specific data, clean it up, and make it accessible via Github as CSV and JSON files. This is something many government agencies also do not have the time or expertise to make happen, and could use the help of citizen data activists, who are passionate about the areas of society where open data can make an impact.
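
As a rough idea of what this step looks like once the data is cleaned up, the following Python sketch converts a CSV file into JSON; the file names are placeholders.

    import csv
    import json

    def csv_to_json(csv_path, json_path):
        """Read a cleaned-up CSV file and write it back out as a JSON list."""
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        with open(json_path, "w") as f:
            json.dump(rows, f, indent=2)
        return rows

    # placeholder file names
    # csv_to_json("veteran-hospital-locations.csv", "veteran-hospital-locations.json")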
Share Data and Content As Public API When It Makes Sense
The first step of Adopta.Agency projects is around cleaning up open data and making it available as simple CSV and JSON, with the second step focused on making these formats available as an interactive API (if it makes sense). While a more advanced component of any project, an API can be easily deployed, without any programming experience, using modern, cloud API management solutions.
Showcase What Is Being Built On Each Adopta Project
Open data efforts are all about providing actual solutions, and Adopta.Agency is focused on making sure small, meaningful elements like interactive widgets, visualizations, and other tooling are provided. Each project should have a handful of tools that put the data made available via each project to work, helping expand understanding of the topic area, showcasing the value created, and the potential of open data.
Highlight The Team Behind All Of The Work
All of this open data work does not happen without people, and showcasing the open data and tooling is important, but it is also important to showcase the team behind it. The Adopta.Agency blueprint has built in the ability to showcase the tooling built, as well as the people and personalities behind the hard work going on.
Share What Tools Were Used In Each Project
The engine behind the Adopta.Agency blueprint process is a carefully assembled and developed toolbox. The Adopta.Agency toolbox includes Github as the platform, Github Pages for the free project hosting, Google Spreadsheets as the data store, API Spark for API deployment from the Google Spreadsheets, a CSV Converter, a JSON Editor Online, and D3.js for visualizations from CSV and JSON files. All of this can be run for free, unless projects are of a larger scale, or require private repositories for collaboration--if kept public, Adopta.Agency projects should not cost project owners any money.
Answer Any Questions Up Front
Each project that uses the Adopta.Agency blueprint also has a frequently asked questions section, helping answer any obvious questions someone might have about a project, while also forcing each project owner to think through common questions. The FAQ section encourages project stewards to regularly add questions, making it an easy resource for users to get up to speed on a project.
Offer Support Through Github Issues
One of the most vital aspects of using Github is the usage of the Github issue management features for establishing a feedback loop around each Adopta.Agency open data project. Think Facebook for each of the open data projects, encouraging conversation, questions, and valuable feedback that helps move any project forward, making the work a social endeavor.
Tell The Story As A Jekyll Blog
One extremely important aspect of the Adopta.Agency lifecycle is telling the story of what you do. The content generated as part of this portion of operations provides a rich exhaust that is indexed by search engines, and amplified on social media, helping increase awareness of the open data work. This adds another dimension to the open data process, one that is missing from many of the existing government efforts.
I Applied Adopta.Agency To The White House Budget
To develop the base prototype for the Adopta.Agency project, I applied it to something I am very passionate about, the budget for the federal government. I feel pretty strongly that having the existing spreadsheets for the US budget available as JSON and CSV will help us better understand how our government works, and will help us tell better stories about spending using visualizations, and other tooling.
Next I Applied It To The Department of Veterans Affairs
To continue the work I had already been doing at the Department of Veterans Affairs, I wanted to push forward the Adopta.Agency blueprint by applying it to very important, veteran related data. I got to work cleaning up population focused data, healthcare expenditures, insurance programs, loan guarantees, and veteran deaths. I have since hit a significant data set, breaking down VA spending by state, which I will break out into its own project.
My Partner Did It With My Brother's Keeper Data
With a prototype blueprint hammered out, my partner in crime Audrey Watters was able to push the blueprint forward some more by applying it to My Brother's Keeper data, which provides valuable data on men & boys of color. She is still working through the numerous data sets available, and telling the story along the way, which has resulted in some very interesting conversations around how this work can be expanded in different directions, like collecting the same data for women & girls of color, and possibly other ethnic groups.
Attracted Someone Passionate About Fisheries
As I was telling the story about building the Adopta.Agency blueprint, an individual contacted me to see if we could possibly apply Adopta.Agency to the area of commercial fisheries. To support the request, I have begun a project that is focused on NOAA fishery industry data, have begun pulling available data sets, and have started the process of cleaning them up in Google Sheets--work that will continue.
What Were The Challenges?
During the six months of work there were numerous challenges identified, beginning with the stigma around Github being a very difficult platform for non-developers to use, and concerns around the technical skills needed to work with JSON and APIs. There were additional concerns around the interest in making this type of change in government, and whether the average citizen has the passion to make this work. Overall, the people we spoke with felt it was a viable approach, and that the challenges could be overcome if there were proper support across all projects and the Adopta.Agency community. There will have to be a certain amount of trust established between data stewards, the public, and the government agencies involved.
What Is Next For Adopta.Agency?
We felt the Adopta.Agency prototype was a success, and will continue to work on projects, as well as work to amplify the approach to opening up data. While a lot of work is involved with each project, the simplicity, and the journey users experience along the way, really resonated with potential data stewards and project owners. We have a number of people who engaged in conversations throughout the prototype work, and will be engaging these groups to continue the momentum we were able to establish within the last six months.
Looking At Applying To International Aid Work
Our work on Adopta.Agency has also opened up a conversation with Sonjara, Inc. around a handful of additional potential grants, where we could apply the open data and API blueprint in the area of foreign aid, and government spending at the international level. Our two groups have had initial conversations, and will be targeting specific funding opportunities to help apply the existing work within this much needed area of government open data.
Opening Up The Valuable Information At Privacy Rights Clearinghouse
Another conversation that was opened up as part of the Adopta.Agency project was with the Privacy Rights Clearinghouse, which has been a steward of privacy rights educational content and security breach data since 2005. The organization is very interested in making more of its rich data and content available via APIs, allowing it to enrich other web and mobile applications. This would be an exciting area to begin moving Adopta.Agency beyond just government data, and help non-profits like the Privacy Rights Clearinghouse.
Adopta.Agency For The 2016 Presidential Election
My primary supporter 3Scale, who has supported my regular API and open data work since I was a PIF, has expressed interest in sponsoring the Adopta.Agency blueprint to be applied to the 2016 presidential election. The objective would be to target open data that would prove valuable to journalists, bloggers, analysts, and other active participants in the 2016 election. We are in initial discussions about how the process can be applied, and what funding is needed to make it a success.
Improving Upon The API Evangelist Portal Template
Some of the work included in the Adopta.Agency blueprint has been included in the next version of an existing open API portal template hosted on Github. With the open licensing of the Adopta.Agency work, I am able to easily integrate it into any existing open or commercial project that I work on, which provides a forkable, easily deployable developer API portal for launching in support of open API efforts. The Adopta.Agency blueprint has provided the work with some much more non-developer friendly ways of handling API portal operations, which can be applied across open API efforts in numerous business sectors.
Target More Agencies
With Adopta.Agency, we will continue targeting more government agencies. We have a list of interested individuals with passions for opening up government agencies, ranging from NASA to Department of Justice policing data. We have a short list of over 25 federal government agencies to target with Adopta.Agency projects; the only limitations are human and financial resources.
Target More Data
Along with the federal agencies we will be targeting, some of the conversations our Adopta.Agency work has opened up push the model beyond just the federal government. Projects focusing on election data could span both public and private sector data. The Privacy Rights Clearinghouse work will do the same, pushing us to make more data available for consumption across all layers of the economy.
Target More Grant Funding
In our conversations with Sonjara, Inc. and the Privacy Rights Clearinghouse, we are looking for specific grant funded projects where we can apply the process developed as part of the Adopta.Agency work. In 2016, we are looking to target up to five new grant opportunities, seeking to move the entire project forward, as well as potentially spawn individual open data projects, expanding the Adopta.Agency community.
Evolve the Blueprint's Reach
While pushing forward the Adopta.Agency blueprint will continue in 2016, the most significant portion of the project's evolution will involve reaching more individuals, building more relationships, encouraging more conversations, and yes, opening up more open data across the government and other sectors of the economy. In 2016, our goal will be to focus on evolving the reach of Adopta.Agency, by continuing to apply the blueprint to new projects, and working with other passionate individuals to do the same, evolving the blueprint's reach, and its impact.

This overview is driving my presentation at Knight Foundation demo days, to wrap up the grant cycle, but the project will be ongoing, as this was just the seed funding needed to make it a reality.



from http://ift.tt/1QpucQt

Thursday, December 10, 2015

Machine Learning Will Lead When It Comes To Algorithmic Rent-Seeking

I've long been searching for a term to describe a concept that I see across the API space, where API providers, API consumers, and API service providers take more from the space than they give back. This concept shows up across the API space in many forms, preying upon the open nature of the API sector, and looking to extract value, and generate revenue, on the backs of the hard work of others.

After listening to the audio book version of The Price of Inequality by Joseph E. Stiglitz on a recent drive, I re-learned the term rent-seeking, a phrase I had heard before, but had not fully grasped its potentially wide meaning when applied to financial products, natural resources, and other types of common resources--Google gives me the following definition:

Rent-Seeking: When a company, organization, or individual uses their resources to obtain an economic gain from others without reciprocating any benefits back to society through wealth creation.

Rent-seeking is common practice within the API layer of the Internet. When you consider the concept, and apply it to the unlimited number of digital resources that are being exposed via APIs, you begin to see unlimited possibilities for rent-seeking when transparency is not present. Now that I have the seed planted in my head, and a phrase to apply, I will be exploring this concept more, but I couldn't help but think about one of the biggest offenders I'm seeing unfold across the space--machine learning.

Don't get me wrong. There will be a lot of machine learning solutions that will help move our digital world forward, but there are also lots of smoke & mirror machine learning and big data solutions, which will purely be seeking resources to mine. As an API provider, it can be easy to focus on the individual API consumers who are bad actors, when in reality you should be thinking much bigger, about the potential partners or service providers who can consume large amounts, or even all, of your resources.

This dark side of machine learning that I am focusing on will include the artificial intelligence, machine learning, big data, and analytics providers who will be selling you magical solutions and lofty promises, which will require the ingestion of your valuable data, content, and algorithmic resources before they can return said magic to you. Some of these solutions will offer more value than they consume, but many will not. If you are the steward of a valuable corpus of data or content, and have crafted valuable algorithms, many large companies will be approaching you in the coming months and years, interested in helping you generate insights from these valuable resources.

Machine learning isn't bad. I just want to help you be aware that there are providers who just want to mine your resources, looking to add value to their own data, content, and algorithms, or possibly even just passing it on and selling it directly to other providers. Machine learning will be big business, and something that will dramatically incentivize rent-seeking behavior hidden behind an algorithm. I'll be telling stories of other rent-seeking behavior that I see in the API space, but at this point I feel like machine learning will lead when it comes to algorithmic rent-seeking in the API economy.



from http://ift.tt/1QhQAMi

Friday, December 4, 2015

Not Every Band Needs To Sign With A Major Record Label Or Become An Orchestra

I used to work in the music industry back in the 90s, and sometimes the tech sector reminds me of that time hustling in the music space. There are definitely a lot of things that are different, but the business of the tech space often reminds me of some of the business currents that exist in the music industry.

When you understand the lay of the land in the music industry, there are three distinct spheres of operation:

  • Independent - Small, successful bands who build a solid audience, and are able to make a living.
  • Label (Failure) - Bands that feel the need to be the next Led Zeppelin, and sell their souls.
  • Label (Success) - The small portion of bands who actually find large scale success, and get all the attention.

Most of the work I did in the music promotion space existed within the world of independent or label (failure), with only one semi-successful band that had a brief flash of success, and I'd still actually put them in the label (failure) bucket, now that I think about it. There is a lot of money to be made as a promotion company selling services to bands who think they are going to make it big--they tend to sell everything, and beg, borrow, and steal to pay you. (Sounds familiar?)

There are some independent bands that understand they can actually make a decent living making music, through a mix of selling music, merchandise, and touring. Most other bands think they need to sign with a major label, when in reality the labels are just gambling and playing the numbers on the bands they sign. Only a small percentage of any record label's catalog will make them big money; the rest are marbles on the roulette table. Smart bands know they can do well by playing good music, and running the business side well, whereas the not so smart bands always have to be famous, with only a small fraction ever making it--the rest burn out and fade away.

In the end, not every band needs to sign with a major record label, and not all bands need to aspire to be an orchestra. This reminds me of tech startups, in that not all startups need to scale and sign with a VC, and they definitely do not need to scale and become the next enterprise organization.



from http://ift.tt/1lDqczg

Thursday, December 3, 2015

If I Cannot Scale It On My Own Or Use A Service Provider I Do Not Scale It

I have had numerous startup ideas, half to fully baked, over my 25 year career, with just two I consider to be a success. All of them taught me massive lessons about myself, business, and building relationships with other people. All of this experience has gone into my invention of the API Evangelist persona, and there are some hard learned lessons that dictate how I grow and scale what I do.

I do not scale anything I cannot scale by developing a new API, or by putting a software as a service provider to use (which I can afford). The concept of hiring someone does come into the picture from time to time, but always quickly fades. I feel this provides me with some very healthy constraints, which push me to really think through what scale is to API Evangelist.

I have a huge laundry list of things I would like to do, but because it has to wait until I have the time and energy to do it, much of it never gets done, and that is a good thing. If it is critical, I will do it. If I feel it will help the API community at this point in time, I will do it. If I can convince someone else to do it, I will. ;-) Sometimes I just wait until someone else does it, and purchase their service.

I am not saying this is an approach all companies should follow; I'm just sharing what works for me. My growth in site traffic, blog posts, industry guides, and revenue has all happened slowly and steadily over time, in sync with the amount of work I do, and how much I am able to scale my own operations. This post is just a reminder to myself not to get frustrated with my massive todo list--the scaling of API Evangelist has occurred slowly, and it will never come as fast as I would like.



from http://ift.tt/1TDAbQP

Tuesday, December 1, 2015

All My Code Snippets Now Live As APIs Which Makes Them Way More Discoverable

Historically, I have a "working" Amazon EC2 micro instance that is my playground for writing code. This is where I begin all my coding adventures, and it is often where most of them end up living forever. I have a lot of great work here that I easily forget about, shortly after pushing the ideas out of my head, and onto the server via my IDE.

I have never had a way to index, search, or discover the wide array of coding projects that I have produced--if a piece of code never gets a link in my administrative interface, it will often be lost forever. Sometimes I will write some code to support an idea, only to find a folder adjacent to it that does the same thing. Doh! I forgot I ever did that!

With my new approach to managing my API stack as a series of Github repositories, the ultimate goal of any coding project is to wrap it as a simple API. As I wrap up any little snippet of code, the final step is always to publish it to an existing repo, or create a new repo, and make sure it is available as a simple API endpoint.

As an API endpoint in my API stack, every piece of code becomes discoverable via the APIs.json file, which I can browse via the Github Pages portal, or programmatically via its JSON. I'm sure there are some endpoints I may never use, but at least each is available in my toolbox as an API, and who knows, I may eventually put it to work, or evolve it as part of some future idea.
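
As a rough sketch of what that discovery looks like from the consumption side, the snippet below pulls an APIs.json index and lists the APIs it describes. The URL is hypothetical, and the property names are my approximation of the APIs.json format.

    import requests

    # hypothetical location of my APIs.json index
    APIS_JSON_URL = "http://example.github.io/api-stack/apis.json"

    def list_endpoints(url=APIS_JSON_URL):
        """Pull the APIs.json index and print each API it describes."""
        index = requests.get(url).json()
        for api in index.get("apis", []):
            # property names approximate the APIs.json format
            print(api.get("name"), "-", api.get("baseURL") or api.get("humanURL", ""))

    # list_endpoints()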

I'm already seeing a significant increase in my own operational efficiency because I have my earlier code toolbox available as an API stack.



from http://ift.tt/1InkoUZ

Friday, November 27, 2015

Updated 265 Pages, and 175 Links On My Network To Support The Swagger to OADF Shift

I have written 265 separate posts across the API Evangelist network about Swagger in the last three years. To reflect the recent shift of Swagger into the Open API Initiative (OAI), and the specification being reborn as the Open API Definition Format (OADF), I wanted to update all the stories I've told over the years, to help educate my readers about the evolution, and provide the most relevant links possible.

Using my Linkrot API, which helps me manage the links across my network of sites, I've identified all the pages with Swagger relevant links, and made sure they are updated to point at the most recent material. I've also added a banner to each of the 265 posts, helping educate readers who come across these posts, regarding the transition from Swagger to OADF, and help them understand where to find the latest information.
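
I won't reproduce the Linkrot API here, but the scanning side of this work amounts to something like the sketch below: crawl each post, flag any Swagger related links, and queue them for review. The pattern and the example post URL are illustrative only.

    import re
    import requests

    # illustrative pattern for links I would want to review and update
    OLD_LINK_PATTERN = re.compile(r"""https?://[^\s"'<>]*swagger[^\s"'<>]*""", re.IGNORECASE)

    def find_outdated_links(post_urls):
        """Scan a list of post URLs and return the Swagger related links found in each."""
        findings = {}
        for url in post_urls:
            html = requests.get(url).text
            matches = OLD_LINK_PATTERN.findall(html)
            if matches:
                findings[url] = sorted(set(matches))
        return findings

    # findings = find_outdated_links(["http://apievangelist.com/example-post/"])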

My network of sites is meant to be my workbench for the API space, and provide the latest information possible about what drives the API sector. It is important to me that the information is as accurate as possible, and my readers stay in tune with the shifts of the APIs space, and where they can find what they need to be successful.

In the end though, all of this is really just business. 



from http://ift.tt/1Q3sPWC

Tuesday, November 24, 2015

For A Brief Moment We All Had Swagger In The API Space

For a brief moment in the API space, we all had swagger. When it all began, we were working towards interoperability in a digital world where none of us actually wanted to work together, after being burnt SOA bad before. We all pretended to be working together, but in reality we were only operating on just a handful of available VERBS, until one person came to the table with a common approach that we could all share--something that would bring us together for a brief moment in history.

"It won't work," said the RESTafarians! "It doesn't describe everything," said everyone else. However, for those who understood, it was a start. It was an approach that allowed us to easily define the value we were trying to deliver in the API space. Something that turned out to be a seed for so much more, in a seemingly rich API world, but one that in reality was toxic to anything that was actually sustainable. Somehow, in this environment, one individual managed to establish a little back and forth motion, that over time would pick up momentum, setting a rhythm everyone else could follow when defining, sharing, collaborating, and building tooling online.

We all had a little swagger...

For a brief moment, we were all walking together, blindly following the lead that was given to us, while also bringing our own contribution, adding to the momentum day by day, month by month. Somehow we had managed to come together and step in sync, and move in a single direction.

This has all come to an end, as the beasts of business have had their way. There will be no open--just products that call themselves open. There will be no tune for us all to march to, whether you are a developer or not. We just have acronyms that only hold meaning to those within the club.

The fun times are over. The rhythm has ceased. Money and business have won over community and collaboration, but for a brief moment we all had swagger in the API space.



from http://ift.tt/1P5fuym

Monday, November 23, 2015

Why You May Not Find Me At The Bar After Tech Events

When you see me at conferences, you might notice that I am often very engaged while at the event. However, after the event lets out, and everyone heads off to the bar or pub, you may not find me tagging along anymore. You see, I am finding it increasingly hard to be present, because of one thing--my hearing.

You may not know this, but I am completely deaf in my left ear, and only have around 30% left in my right ear. These were the findings of a hearing test I had done in 2007, and I'm assuming that by 2015 I have lost even more. Historically I have done pretty well, with a mix of reading lips and piecing together what words I do hear, but this year I am finding it increasingly difficult to make things work.

As I live longer with my hearing impairment, I find one side effect is that I tend to feel sounds more than I hear them, and when I'm in loud bars, I tend to feel everything, and hear nothing. This results in me feeling and hearing the loud noises, but actually understanding none of what people around me are saying. Overall this is really overstimulating, and after spending a day at a conference it can be very difficult for me, leaving me able to handle no more than maybe an hour or two in loud environments.

I also noticed a couple of times recently where people were talking to me, usually on my left side, and I did not notice, resulting in confusion. Then, when I hear only portions of conversations and seem uninterested (as I do not know what is going on), people seem a little offended--if this was you, I am truly sorry.

I understand not everyone who hangs out with me at events will read this, but I wanted to write it anyways, and gather my thoughts. I will be ditching out of bars earlier than I have in the past, and I'm hoping the folks who really want to hang out with me will join me in a quieter setting, where I can hopefully be engaged a little more.

Thank you for understanding.



from http://ift.tt/1Npb2KA

Monday, November 2, 2015

Making Sure My Stories Are Linked Properly

When I craft a story for any of my blogs, I use a single content management system that I custom built on top of my blog API. I'm always looking to make it more valuable to my readers by providing the relevant links, but also to make it more discoverable, and linkable, within my content management and contact relationship management system.

I keep track of how many times I reference companies and people within articles, and the presence of Twitter accounts and website URLs is how I do this. So when I am writing articles, it is important that people and companies are linked up properly. Here is an example from a post I just wrote, for the API consumption panel at @APIStrat (it hasn't been published yet).

I have tried to automate this in my CMS, so when it sees someone's Twitter account, or a company name that already exists in my system, it recommends a link. However, it is still up to me to OK the addition of links to Twitter handles and company names. I do not like this to be fully automated, because I like to retain full editorial control.
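
The recommendation piece is simpler than it sounds; a rough sketch of the matching logic might look like the following, with the handles, names, and URLs standing in as hypothetical entries from my system.

    import re

    # hypothetical entries pulled from my contact relationship management system
    KNOWN_ENTITIES = {
        "@apistrat": "http://apistrat.com",
        "3Scale": "http://www.3scale.net",
        "Audrey Watters": "http://hackeducation.com",
    }

    def suggest_links(post_text):
        """Scan a draft post for known Twitter handles and company names,
        and suggest a link for each one found--final approval stays with me."""
        suggestions = []
        for name, url in KNOWN_ENTITIES.items():
            if re.search(re.escape(name), post_text, re.IGNORECASE):
                suggestions.append((name, url))
        return suggestions

    # for name, url in suggest_links("Looking forward to the @APIStrat panel..."):
    #     print("link", name, "->", url)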

I am just sharing this so that it gets baked into my operations, and I remember to use the link system more, but also to acknowledge how much work it is to make all of my storytelling work--though ultimately it is worth it. Formally telling this story on my blog is how I make sure all of this continues to be a reality across my operations.



from http://ift.tt/1KVlxgY