The API Evangelist Blog

This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.


Holding Little Guys More Accountable Than We Do VCs And Bigcos

I spend a lot of time defending my space as the API Evangelist. I’ve had lengthy battles with folks in the comments of my blog for defending women, charging for my services, being pay for play, having secret agendas, and much more. I’ve had my site taken down a handful of times (before I made it static on Github Pages), because I stood up for my girlfriend, or just joined in on the wrong (right) cause. When you have been doing this as long as I have, you see the size of the armies of tech bros that are out there, waiting to pounce. It is why I don’t share links to Reddit or Hacker News anymore. I stopped sharing to DZone for the same reason, but they’ve since cleaned up their community, brought in some great talent (including women), and I’ve started syndicating there again.

Most recently I had someone accuse me of pay for play, even though there was no disclosure statement present, and I have had two other folks accuse me of having an agenda set forth by my partners. If you know me, you understand how ridiculous this is, but this never stops the waves of young men finding their way to my site, and pointing the finger. What I find fascinating about this is these men never point the finger at Techcrunch, or call out API startups (and bigcos) for colluding, sponsoring, pay for play, and the other bullshittery that goes on. They go after folks who are outspoken against the machine, and never after the machine itself. If people don’t like something I said, or what someone I’m writing about is up to, they tend to go after me, without spending any time getting to know me, looking at my background, or looking at the seven years of storytelling on my site–there is a single page to do it!

The majority of these folks rarely ever have their own blog, name and pictures on their Twitter or Github accounts, and have a minimal digital footprint. They don’t understand what it is like to be out in the public sphere sharing their ideas on a daily basis, and prefer hiding behind anonymity while they go after the little guy. Again, rarely do you see them going after Silicon Valley, or the larger corporate machine–too big of a target. I have special Twitter lists I add these folks to, and usually when one of them pisses in my Cheerios I’ll spend some time seeing what the others are up to, and most times it is more of the same. Trolls. It is easy to dismiss these folks as part of the wider illness we see infecting the Internet, but much like sexism and racism, the real damage is done by the waves of complicit individuals and platforms who support them, chime in, pile on, and maintain an environment where this type of behavior is normalized.

For the most part I’m a diplomat. I tend to perform API Evangelist in a supportive, friendly tone. I started API Evangelist with this tone intentionally. I saw the meanness in the space, fully grasped my assholeness as a VP of technology within an enterprise group, and wanted to do something different, and help people feel like APIs were approachable and friendly. I still support this tone, but this week, I’m performing a more offensive version of API Evangelist. One that is calling y’all out for the bullshit that you do each day online in this business, or condone and support daily in your complacency. I’m not really as unhinged as these posts make me out to be (close), I’m just exploring the illness that is rampant in the space, and pushing back a little. There is a significant amount of cover afforded to VCs, bigcos, and the startups who support their message, and for the handful of us out in the open, we often become a much easier target, when in reality there should be much more accountability for those that are responsible for the illnesses in the tech sector, and are often out of sight, but actually have their hands on the puppet strings that make things go around.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


Your Microservices Effort Will Fail Just Like Your API And SOA Initiatives

You are full steam ahead with your microservices campaign. You’ve read Martin Fowler’s blog post, and talked about the topic with your team for the last six months. After a couple of pilot projects, you are diving in, and have started decoupling the monolith of systems that you depend on to operate your business each day. You have mapped out all the technical details of all the code and backend systems in play, and have targeted about 30% of existing systems for reworking using a microservices strategy. Yet, despite all your planning, your microservices effort will still fail just like your API efforts, and their predecessor the SOA initiative, did.

Despite all your research, planning, and eye for technical detail, in about 7 months everything will begin to slow, and by month 10 you will begin to get very, very frustrated. You see, you haven’t included any of the human element in your planning. You haven’t thought about the business, cultural, legal, financial, and other non-technical aspects of operating your systems. You’ve done a fine job of developing a strategy for decoupling your monolith database, but you haven’t invested any time in what it will take to educate and bring up to speed all the business users, support, QA, and other folks you will need to actually make all this work. You are so blinded by technological trends, you forgot to spend time talking to the people.

You are feeling good right now because you have surrounded yourself with yes men. People who are in perfect alignment with what you are doing. They think like you, and have drank the same kool-aid. However, once you start implementing your grand strategy to make everything smaller, more micro, you will begin to see friction. Sales folks will begin to see you as a threat as you fragment their landscape, and see you as taking a piece of their action. Business users will begin to freak out, because while they had to battle with IT before to get access to resources, now they have to deal with many smaller teams, with even more unfriendly people to talk with–you just increased the scope of what they need to know. Many will see your work as just increasing complexity, and expanding the landscape of their work, in an already hectic world. As this perception increases they will find ways to throw a monkey wrench into your finely tuned plan.

Your SOA rollout failed because it was too heavy handed, dictated everything, gave vendors too strong of a voice, and strengthened IT too much in the eyes of business groups. Your API efforts underestimated the fatigue management and business users experienced from SOA, and your team never really believed in APIs anyways–they just thought it was a fad, trend, and pipe dream. Now your team is sold on it being all about offering up smaller services, and you are paying lip service to the fact that “service” should include more than just the technical, but you actually haven’t done the heavy lifting. You don’t have your training, support, documentation, QA, testing, discovery, terms of service, security and other critical aspects of each service in place, let alone the comprehensive view of how this works across the thousands of microservices you are planning. Let’s get real, your microservices plan will fail just like API, and SOA. You are just too in love with the technical, and willfully blind to the human side of all of this, to ever make it work.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


The Reason For Your API Security Breach: You Did Nothing

You just got three separate calls, and countless emails, alerting you to the fact that you just had a major security breach. You don’t know the extent of the damage yet, but it looks like they got into your primary customer database via the APIs you depend on for all your mobile applications. You are sitting in your office chair, sweating, and trying to figure out how this happened. I will tell you, it is because you have done nothing. You have de-prioritized security at every turn, resulting in an open door for any hacker to walk through.

Not only have you done nothing, you actually worked against anyone who brought up the topic of API security. You would respond: We don’t have the time. We don’t have the budget. We don’t have the skills. You never listened to any of your staff, even that security lady (what was her name?) you hired last year, who then resigned with a letter listing over 25 security holes she had been trying to take care of, but because of the toxic environment you’ve created, was unable to do anything about, and moved on. You have created an environment where anyone who brings up security concerns feels persecuted, and even that their job is in jeopardy, making “doing nothing” the standard mode across all operations.

You have eight separate mobile applications which all use APIs, and all of them use the customer database in question, which also stores credit cards, which is in violation of your PCI compliance–you know, those forms you sign off on each year? You felt these mobile APIs were secure because they were hidden behind your mobile applications, and your developers had given you an application security scan report last year. In this situation you would love to blame these developers, but all roads lead to you when it comes to responsibility for this situation. You begin to feel sick to your stomach thinking about the 345,633 credit cards and other PII that were leaked. You know the numbers, because you have real time reports on how many customers you have. You just don’t have any real time reports for anything to do with security.

API security was everyone’s first concern when you first pitched these projects starting back in 2010, and you have managed to run for seven years without any major incidents. Each year you have just been more emboldened in your do nothing strategy, but everything has caught up with you now. What do you do? You don’t have a breach action plan. You don’t have any sort of protocol for this type of situation, despite saying that you did several times in meetings. You better get to work dealing with the technical fallout from all of this, because it will last weeks, if not months. Then you get to also start dealing with the business, legal, and political fallout from this breach. Hey, there is a bright spot. The chances are pretty high you might not even have a job after all of this. Enjoy!

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


Extract As Much Value As You Can From Your API Community And Give Nothing Back

You are in a sweet spot. You got a fat six figure job in the coolest department of your company, building out your API platform. You have a decent budget (never as much as you want) to throw hackathons, run Google and Twitter ads, and you can buy schwag to give away at your events. Sure there is a lot of pressure to deliver, but you are doing pretty well. All you gotta do is convince 3rd party developers to do things with your company’s APIs, develop web, mobile, voice, and other applications that generate buzz and deliver the return on investment your bosses are looking for.

It is all about you and your team. Let’s get to work growth hacking! Attract as many new users as we can, and convince them to build as much as we possibly can. Let’s get them to develop SDKs, write articles for us on their blogs, speak at our events, favorite things on Hacker News, and whatever other activities we can. Your objective is to extract as much value from your API operations as you possibly can, and give nothing back. Expect developers to work for free. Expect your hackathon attendees to come up with the next great idea, build it, and hand it over to you for very little in return. This isn’t a partnership, this is an API ecosystem, and your team is determined to win at all costs.

Your API isn’t a two-way street. All roads lead to your success, and your bosses getting what they want. You don’t care that 3rd party developers should be compensated, or that they have any rights to their intellectual property. For the 5% of them that successfully build applications, we will offer them a job in exchange for it, or we’ll just replicate it internally, decrease their rate limits, and increase their error rates so that they can’t compete. Sure you want people to still feel inspired, but not enough so that they’ll ever be able to sustain their applications–the only sustainable applications around here will be owned by the platform. After all, this is all just business–it is nothing personal.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


The Reason Your API Sucks Is There Are No Women And People Of Color On Your Team

I know that many of you are insecure about your APIs. You aren’t transparent with your numbers, and many aspects of your API operations. You are stressed out because you built it, and nobody came. You were able to artificially inflate your new user numbers, and API calls, through paid campaigns and bot activity, but nobody is using it, and you just can’t figure out why. You are asking yourself why doesn’t anyone see the value your API brings to the table? Why aren’t you getting the traction you thought you would get when you first came up with the idea?

You aren’t getting any traction with your API because it sucks. It was a bad idea. Nobody wants it. It sucks because it doesn’t provide any value in a highly competitive space, and you naively thought that if you built it everyone would come. You probably have a number of people around you telling you that your idea is great, and the API will be a hit. You’ve probably had this most of your life, and are used to people telling you that your ideas are great. It is why you feel so uncomfortable around anyone that is critical, because you just aren’t used to being told that your ideas are dumb. It hurts your feelings.

This is why you surround yourself with people who look, act and think like you do. It is why you don’t think women and people of color have the skills needed to work on your dumb, useless ideas. You don’t have the balls to surround yourself with anyone who doesn’t think like you. If you did, you might have been told early on that your idea wasn’t worthwhile, or you might have gotten additional feedback or criticism that would have helped shape it into something useful. I know this is hard for you to hear, and you think you are really smart, and you probably read one of my cheerleader blog posts about how great APIs are, but in reality, if it doesn’t have any use in the real world nobody will care.

Don’t make this mistake with your API. Make sure your API team is as diverse as possible, and then work hard to make sure your community also becomes as inclusive as you can make it. It is a lot more work to do things this way, but it will pay off, and it will save you from potentially launching an API that completely sucks like this again, and doesn’t have any reach beyond just the echo chamber you exist in.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


Your Internal Dysfunction Is Not My API Problem

You hear a lot of discussion regarding public API vs private API. From my vantage point there are only web APIs that use public DNS, but I find that folks hung up on the separation usually have many other hangups about things they like to keep behind the firewall, and under the umbrella of private. These are usually the same folks who like to tell me that my public API stories don’t apply to them, and when you engage these folks in any ongoing fashion you tend to find that they are looking to keep a whole lot of dysfunction out of view from the public, and all the talk really has very little to do with APIs.

I spend my days studying the best practices across the leading API providers, and understanding what is working and what is not working when it comes to operating APIs. I have seven years of research I’m happy to share with folks, and I entertain a number of requests to jump on calls, participate in webinars, do hangouts, and go onsite to do workshops and talks. I’m usually happy to do these things, and when it is a government agency, non-profit organization, and sometimes a higher educational institution, I am happy to do these things at no charge. I like sharing what I know, and letting folks decide what they can use from the knowledge I’ve aggregated.

When I engage with folks I expect not to always be trusted–they don’t know me. However, I’m always surprised when folks think I have an agenda, that I’m looking to change them too fast, that I’m trying to shove something down their throat, and disrupt their position. First, I am always being invited in. I’m not a sales guy. I do not have anything to sell you except for my knowledge (and that revenue just goes back into doing what I do). There is regularly the quiet IT person who has carefully defended their position in the company, and what they do not know, for years, telling me this is all horse shit, and who do I think I am. Or the administrator who is always running around with their head cut off, and feels the need to tell me that I do not understand them, and there is no way that any knowledge that I have is at all applicable to them–what the hell am I even doing here?

Hey, I’m just looking to share. If you don’t want it, I’m happy to tell my stories elsewhere, you don’t have to jump my shit. I’m genuinely trying to help, and share stories about what is working in other situations. I’m not looking to take your job, or make you do things you don’t have time or capacity for. Your dysfunction is not my API problem. I’m guessing this dysfunction is probably why you don’t have any sort of public API presence, and why your existing internal and partner APIs are not everything you’d like them to be. I am just going to quietly pick up my things and go back to what I was doing. I’m sorry that I’ve disrupted your chaotic world, and taken up your time. I most likely won’t be back, but if you ever need to learn anything about APIs, and need a break, you know where to find me.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


API Rants vs. API Research

I know many of you read my blog for the valuable nuggets of information extracted from my regular research into the world of APIs. I spend a great deal of time sifting through very boring, mundane, and sometimes valuable API related goings on. I have managed to muster the energy each week for the last seven years to sift through thousands of feeds, Tweets, and Github repositories looking for nuggets of API wisdom, best practices, and sometimes bad practices, to share here on the blog. Some weeks I find this an easy task, and I really enjoy the process, but most weeks it is a chore–some weeks I don’t give a shit at all. This is one of those weeks. Well, last week was too, but instead of NO blog posts, this week I’m going to shift things up so that I can get back on track.

I have had a series of folks piss in my Cheerios lately, regarding the free and unpaid work that I do, and as a result, I find myself without any writing mojo for the second week in a row, and not caring about sifting through all your API startup blah blah blah. 1/3 of it is about rude and bro assholes, 1/3 of it is that APIs are boring and y’all have no imagination, and 1/3 of it is that I’m a mentally ill asshole. The result of all of this is that you get a week full of API rants, instead of API research. Sooooooo, if this side of my personality turns you off, I recommend tuning out API Evangelist for at least a week, until I feel better, and find the energy to do what it is that I do. There will still be plenty of substance in my posts, and things will still be VERY API related, and something that you can apply in your regular work. I will just be taking off all filters, and it will be written directly from my brain instead of my API research.

I work really hard doing what I do. I work seven days a week, with no regular paycheck or institutional support. I do make a decent living from partners who step up to support me (and pay their bills), but it is a hustle to keep doing what I do. It always grinds on me when folks with good jobs attack me like I have some sort of agenda in their game, or the no name bros come out of the woodwork to question my credibility when they know nothing about me. I spend a lot of time filtering out what I’m really thinking, sticking to the facts, and helping make my nuggets of API research wisdom more digestible, but this week ain’t one of those weeks, so hold on. Once I feel better, I will go back to normal, and make sure I point out the good that is going on in the space, but in the current mode y’all ain’t worth it, and I’m going to unload and rant for the next 30 or 40 posts. Normally I’d keep this kind of stuff just on kinlane.com, but since it will all be API focused, I’m going to give you the uncensored version here on apievangelist.com. Enjoy!


Disaster API Rate Limit Considerations

This API operations consideration won’t apply to every API, but for APIs that provide essential resources in a time of need, I wanted to highlight an API rate limit cry for help that came across my desk this weekend. Our friend over at Pinboard alerted me to someone in Texas asking for some help in getting Google to increase the Google Maps API rate limits for an app they were depending on as Hurricane Harvey hit:

The app they depended on had ceased working and was showing a Google Maps API rate limit error, and they were trying to get the attention of Google to help increase usage limits. As Pinboard points out, it would be nice if Google had more direct support channels to make requests like this, but it would also be great if API providers were monitoring API usage, aware of applications serving geographic locations being impacted, and would relax API rate limiting on their own. There are many reasons API providers leverage their API management infrastructure to make rate limit exceptions, and natural disasters seem like they should be at the top of the list.

I don’t think API providers are being malicious with rate limits in this area. I just think it is yet another area where technologists are blind to the way technology is making an impact (positive or negative) on the world around us. Staying in tune with the needs of applications that help people in their time of need seems like it will have two components: 1) knowing your applications (you should be doing this anyways) and identifying the ones that provide a public service, and 2) staying in tune with natural and other disasters that are happening around the world. We see larger platforms like Facebook and Twitter rolling out permanent solutions to assist communities in their times of need, and it seems like something that other smaller platforms should be tuning into as well.
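To make this concrete, here is a minimal sketch of what a disaster exception might look like inside an API management layer. Everything here is hypothetical, the application identifiers, the limits, and the window are stand-ins, and a real provider would wire this into their existing rate limiting infrastructure rather than a dictionary in memory.

```python
import time
from collections import defaultdict

# Hypothetical set of application IDs flagged as serving an active disaster
# response, maintained by whoever reviews requests like the one above.
DISASTER_EXEMPT_APPS = {"harvey-shelter-map"}

DEFAULT_LIMIT = 100      # requests per window under normal conditions
EXEMPT_LIMIT = 10000     # relaxed limit for flagged disaster-response apps
WINDOW_SECONDS = 60

_requests = defaultdict(list)  # app_id -> timestamps of recent requests

def allow_request(app_id):
    """Sliding-window rate limit check that relaxes the limit for
    applications flagged as serving a disaster area."""
    now = time.time()
    recent = [t for t in _requests[app_id] if now - t < WINDOW_SECONDS]
    _requests[app_id] = recent

    limit = EXEMPT_LIMIT if app_id in DISASTER_EXEMPT_APPS else DEFAULT_LIMIT
    if len(recent) >= limit:
        return False  # the gateway would normally return a 429 here
    _requests[app_id].append(now)
    return True
```

The interesting part isn’t the code, it is who gets to add an application to that exempt list, and how quickly they can do it when a storm is making landfall.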

Disaster support and considerations will be an area of API operations I’m going to consider adding into my research, and spending more time to identify best practices, and what platforms are doing to better serve communities in a time of need using APIs.


This Week’s Troubling API Patent

I found myself looped into another API patent situation. I’m going to write this up as I would any other patent story, then I will go deeper because of my personal connection to this one, but I wanted to make sure I called this patent what it is, and what ALL API patents are–a bad idea. Today’s patent is for an automatch process and system for software development kit for application programming interface:

Title: Automatch process and system for software development kit for application programming interface

Patent# : US 20170102925 A1

Abstract: A computer system and process is provided to generate computer programming code, such as in a Software Development Kit (SDK). The SDK generated allows an application to use a given API. An API description interface of the system is operable to receive API-description code describing one or more endpoints of the API. A template interface is operable to receive one or more templates of code defining classes and/or functions in a programming language which can be selected by the selection of a set of templates. A data store is operable to use a defined data structure to store records of API description code to provide a structured stored description of the API. A code generation module is operable to combine records of API with templates of code which are arranged in sets by the language of the code they contain. The combining of records and code from templates may use pointers to a data structure which is common to corresponding templates in different sets to allow templates of selected languages to be combined with any API description stored.

Original Assignee: Syed Adeel Ali, Zeeshan Bhatti, Parthasarathi Roop, APIMatic Limited

If you have been in the API space for as long as I have you know that the generation of API SDKs using an API definition is not original or new, it is something that has been going on for quite some time, with many open source solutions available on the landscape. It is something everyone does, and there are many services and tooling available out there to deliver SDKs in a variety of languages for your API. It is something that just isn’t patent worthy (if there was such a thing anymore). It shouldn’t exist, and the authors should know better, and the US Patent Office should know better, but in the current digital environment we exist in there isn’t a lot of sensibility and logic when it comes to business in general, let alone intellectual property.
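To show how unremarkable the core idea is, here is a rough sketch of template-driven code generation from an API definition, the same general pattern open source tools like Swagger Codegen have used for years. The file names and the fallback naming are just assumptions for the example, and real generators handle far more (models, auth, path parameters, multiple languages).

```python
import json

# Hypothetical OpenAPI (Swagger) definition sitting on disk.
with open("openapi.json") as f:
    spec = json.load(f)

BASE_URL = spec.get("host", "api.example.com")

# One template per target language is enough to stamp out a client
# function for every path and method described in the definition.
FUNCTION_TEMPLATE = '''
def {name}(**params):
    """{summary}"""
    return requests.request("{method}", "https://{base}{path}", params=params)
'''

generated = ["import requests"]
for path, methods in spec.get("paths", {}).items():
    for method, operation in methods.items():
        if method not in ("get", "post", "put", "patch", "delete"):
            continue  # skip non-operation keys like shared parameters
        fallback = method + path.replace("/", "_").replace("{", "").replace("}", "")
        generated.append(FUNCTION_TEMPLATE.format(
            name=operation.get("operationId", fallback),
            summary=operation.get("summary", ""),
            method=method.upper(),
            base=BASE_URL,
            path=path,  # path parameters left as-is for brevity
        ))

with open("generated_client.py", "w") as out:
    out.write("\n".join(generated))
```

Swap in a different set of templates and you get a different language, which is the whole trick the patent describes.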

I haven’t pulled any patents as part of my API patent research in some time, so this particular patent hadn’t come across my radar. I was alerted to the patent via a Tweetstorm, shared with me in Slack by Adeel (one of the authors of the patent):

In which my response was, “oh. ouch. well, gonna have to think on this one sir. I’ll let you know my response, but I’m going to guess you might not like how I feel either. I’m not pro-patent. let me simmer on it for a couple days and I’ll share my thoughts.” As I was simmering I was pulled into the conversation again by Tony Tam:

In which my response was:

I was taking my usual time, gathering my thoughts in my notebook. Taking walks. Simmering. Then I wake up Saturday morning to this Tweet:

The comment did get me to Tweet a couple of times, but ultimately I’m not going to engage or deviate from my initial response to gather my thoughts and write a more thoughtful post. The machine wants me to respond emotionally, fractionally, and in ways that can be taken out of context. This is how the technology space works, and keeps you serving its overlords–the money folks behind it.

I do have to admit, when I first responded I was going to take a much harder line against APIMATIC, but after seeing the personal attacks on Adeel, his attempts to defend himself, and then this no presence Twitter account coming at me personally, bringing into question whether maybe I was being paid to write articles, I changed the tone of this story significantly. The conversation around this patent shows what is wrong with the business and politics of APIs, more than anything API patents have done to the community to date.

First, Adeel doesn’t understand the entire concept of what Tony meant when he said patent, any more than he grasps what Tony meant by douchebag. This isn’t a jab at Adeel. It’s the truth. Adeel is a Pakistani, living in New Zealand. I’ve spent the last couple of years engaging with him in person, and virtually via hangouts, Slack, and email. I’m regularly having to explain some very western concepts to him, and often find myself thinking deeply about what I’m saying to make sure I am bridging things properly. Not because Adeel isn’t smart (he’s exceptionally smart), it is just because of the cultural divide that exists between him and me.

Adeel didn’t file their patent out of some competitive ambition, or stealing of open source ideas as referenced in the Tweetstorm. He did it because it is what the academic environment where APIMATIC was born encouraged, and it is something the institution associates with smart ideas. This concept isn’t unique to New Zealand, it is something that still flourishes in the United States. Where the cultural bridge was necessary is when it comes to why patents are bad. In these academic environments, you have a good idea, you patent it. It is something that is historically acceptable, and encouraged. You see this across institutions and within enterprises around the globe, with patents as a measure of individual success. Adeel and his team had a good idea, so they patented it. He didn’t think he was doing anything wrong.

Adeel isn’t being any more aggressive or vindictive than Sal or Matt were with their hypermedia patents, or Jon Moore was with his patents. It is what their VCs, companies, and parent institutions encourage them to do with their good ideas (go after them and make your mark on the world). I fault all of these individuals just as much as I fault Adeel, or even Tony for that matter, when it comes to the name of an open source product suddenly becoming a trademarked product. Ironically this conversation is going up right after a post regarding how much I respect Tony for his work, despite me being VERY upset about Swagger not remaining attached to an open source product we all had contributed so much to, and helped spread the word about. If we are worried about the sanctity of open source, we should be defending all dimensions of intellectual property. I know, not a perfect comparison, but I imagine Tony feels similar to what I did when this happened. Which I let him know via email, but I have never gone after anyone individually, or personally, siccing my dogs on him, although I’m sure it might have felt that way.

All of this makes me think deeper about the relationship between open source and APIs. What are the responsibilities of companies who wrap open source technology with an API and offer a commercial service? How do licensing, trademarks, patents, and other intellectual property concerns need to be respected? I’m guessing I could go through many of the API patents in my research and find that most filings did not consider or respect the open source offerings that came before them. People had a good idea. They were in an environment that encourages patents, and they filed for a patent to show their idea was worthwhile–that USPTO stamp of approval is widely recognized as a way to acknowledge your idea is worthy.

I want to make this clear. I am not defending patents. I am defending Adeel. I am doing so strongly, because of the no name person who decided to chime in and question the credibility of my API storytelling. Otherwise I would have laid things out, and told Adeel he needed to learn the lesson and move on. I just saw everyone piling on Adeel like he was a bad person, and some tech bro just patenting things to get the upper hand. I have known Adeel for years, and know that is not the situation. Once I started getting piled on as well, it switched things into personal mode for me, and now I feel the need to defend my friend, but not his actions. The tech community loves to pile on, and I deal with wave after wave of tech bros who don’t read my work and accuse me of things that people who DO read my work would NEVER accuse me of. Adeel is my friend. Adeel is an extremely intelligent, honorable, kind-hearted person, and it pissed me off when people started piling on him.

Now that you understand my personal relationship, let’s address my business relationship. I am an advisor to APIMATIC. Not because I’m paid, or there are stock options, it is because he is my friend. In August (a couple of weeks ago), Adeel and his investors offered me a 0.25% share of capital in the company, but there has been no paperwork, nothing signed, and no deals made as of today. Honestly, after the $318 check I got for my latest advisory role, which I spent about 60 hours of work on, plus travel costs out of my own pocket, I have little interest in pushing the conversations forward to the point where things ever get formalized. It just isn’t a priority for me, and damn sure has never influenced my writing about APIMATIC, let alone a story from June of 2015, like the Tweetstorm participant accused me of, without doing any due diligence on who I am. You are always welcome to ask ANY of my partners if I take money to write positive things about their products, or guest posts, and you’ll get the same answer from all of them–it is something I get asked regularly, and just don’t give a shit about doing.

API patents are a bad idea. Patent #US 20170102925 A1, for an automatch process and system for software development kit for application programming interface, is a bad idea. API patents are a bad idea because people think they are a sign of being smart when they are not. API patents are a bad idea because corporations, institutions, and organizations keep telling people they are a sign of being smart, and people believe it. API patents are a bad idea because the entire US patent system is broken, because it is an intellectual property relic of a physical age being leveraged in a digital realm where things are meant to be interoperable and reusable. API patents are a bad idea because y’all keep doing them thinking they are a good idea, when they just open you up to being a tool that allows corporations to lock up ideas, and provide a vehicle for others to use them as leverage in a court of law, and in behind closed doors negotiations. Ultimately whether or not patents are bad will come down to who litigates with their patent portfolios, and the patents they acquire. The scariest part is that most of this leveraging and strong-arming won’t happen in a public court, it will happen behind closed doors in arbitration, and in venture capital negotiations.

Which really brings me to the absurdity of this latest patent Tweetstorm. I’m all for showcasing that patents are a bad idea, and even shaming companies and individuals for doing them. However, I saw this one get personal a couple of times, and it even took a jab at me. Y’all really shouldn’t do this shit in glass houses, because patents are just part of your intellectual property problems. The NDAs y’all are signing are a bad idea. Those deals you are making with VCs are mostly bad ideas. Y’all are just as ignorant, or maybe in denial, about the deals you are making as Adeel is about the negative impacts of the US patent system. Most of these deals you make will result in your startup being wielded in more damaging ways than Adeel’s patent ever will. Yet you keep doing startups, signing deals, and attacking your competition, and people like me who are just investing in the community. Adeel is currently reading about patents, which I regret to say won’t provide him with the answers he is looking for. There are few books on the subject. Maybe reading something from Lawrence Lessig might get you part of the way there. The real answers you are looking for come from experience, and that is what you just received. This was an easy lesson. Just wait until APIMATIC is acquired by a bigco, and see your patents wielded in ways you never imagined–those will be some harder lessons.

I don’t have a problem with calling out Adeel for APIMATIC’s patent–it is what should happen. I have a problem with him being called a douchebag, and the other personal attacks, and assumptions about his (and my) character that occurred within the Tweetstorm. By all means, call him out on Twitter. By all means, educate him about the damage done to the OSS and API community by doing patents. Don’t make these things personal, assume malicious intent, and recklessly begin questioning the character of everyone involved. Also, if you are just spending your time calling out your competitors for this behavior, because it bothers you, and you don’t call out your partners, and the companies you work for, and the services you use, for their abusive use of OSS and damaging intellectual property claims, you have significantly weakened your argument, and won’t always be able to count on me to jump into the discussion and have your back. I do this shit full time, with no financial backing, corporate or institutional cover. I’m out here full time, not hiding behind a Twitter handle, holding folks accountable whenever and wherever I can. If you are doing a startup, learn from this story. Educate your institutions, companies, and investors about how API patents are damaging everything we are doing, and will never actually demonstrate you have a novel idea–they just lock up other ideas.

In the end, everything we know as API is already patented. Look through the thousands of API patents in my research. Hell, Microsoft already has the patent on API definitions, so what does it matter if there is a patent on generating SDKs from API definitions? Also, my patent on API patents is pending, so it will all be game over then. Mwahahaha!! ;-) As Tony said, the patent system is broken. Let’s keep letting our companies, organizations, institutions, and most importantly the USPTO know that it is broken. Let’s keep calling out folks who still think API patents are a good idea (sorry Adeel), and schooling them on the why, but let’s not be dicks about it. Let’s not assume people have bad intentions. Let’s understand the history of patents, and that many people are still taught that they are a sign of having good ideas, and just what you do as part of regular business operations. Let’s just make sure we also lead by building a better API service or tooling, as Adeel brought up in his defense. Also, when he said this, he wasn’t making an APIMATIC is better than Swagger Codegen (or others) argument, he is just focused on making a better product in general. I’m sorry, but he has. Y’all can focus on the merits of Swagger Codegen vs. APIMATIC, but what he’s done with the SDK docs, portal, and continuous integration ARE great improvements. Much like the commercial service Swagger has become (ya know, the Swagger in Swagger Codegen), which evolved on top of the open source API specification and tooling Tony set into motion for ALL of us to build upon–thanks again Tony.

P.S. I wouldn’t have been so hard on Tony if I hadn’t been looped in to defend intellectual property and OSS like this.
P.S.S. I wouldn’t have been so easy on Adeel if a no name McTwitter account hadn’t accused me of selling out, when I don’t do that.
P.S.S.S. Adeel, patents are bad because people use them in bad ways, and the US Patent Office is underfunded, understaffed, and can’t tell good patents from bad ones–this is the way the bigcos want it.
P.S.S.S.S. Sorry I’m such an asshole everyone, but I hope y’all are getting somewhat used to it after seven years.

**Disclosure:** I am an advisor to APIMATIC (very proud of it)!


County Level Marijuana Regulation In California Using APIs

Counties across the State of California are scrambling to get everything in order now that marijuana is legal, and the 3rd party vendors working with the state are using an API to try and bridge the regulatory needs of each county, as they look to regulate the brand new industry. It sounds like the marijuana regulatory API isn’t 100% ready for prime time, but it is interesting to hear that the state is looking to “mitigate the burden of counties” when it comes to the production of marijuana using APIs.

I have been curating news about APIs in use across the growing marijuana industry, but this is the first story I’ve written on the subject. Now that I’m seeing APIs used as part of the regulatory engine for the industry, things are getting a little more real, and not just about finding seeds, stores, and other industry data. I’ll keep scratching around to see what I can find out about the software vendors mentioned in the article, and see if I can get my hands on any documentation, or a link to any active portal. I’m curious to see where this marijuana regulatory API train is headed.

Since the marijuana industry is a completely new one for cities, counties, and states to manage, there is an opportunity to leverage new technology like APIs as part of the interactions between government entities, with the help of 3rd party providers. Maybe there is even some opportunity for revenue generation on top of these APIs, allowing government to fund the software development in a way that it could also be used across other government systems. Push things forward with the rollout and expansion of the marijuana industry, and use that to fund other government systems that lack the funds, and are often years behind. The problems states face in working with counties don’t stop with this new marijuana industry, and there are so many other aspects of government operations that could benefit from APIs helping “mitigate the burden of counties” trying to comply with state laws and regulations.


Looking At Facebook Blueprint As I Study API Training Programs

I am preparing a training section of my API Evangelist research, and part of the process involves learning about what other API providers and API service providers are up to in this area. On my list to look through is Facebook Blueprint, their training area for the platform. The courses present there aren’t specifically for the Facebook API, and primarily target business users, but the approach translates to API focused training materials, and showcases what is a priority for Facebook when it comes to educating their platform consumers.

As part of my API training research I want to understand the building blocks employed by Facebook so that I can apply them as part of my API Evangelist training efforts, and help other API providers and service providers apply them as part of their operations as well. Here are a few of the API training building blocks I found present:

  • Courses - A variety of online courses that teach you about everything Facebook.
  • Webinars - The webinars they provide around the content they are publishing.
  • Live - The live, in person workshops and courses they provide around the world.
  • Case Studies - Case studies of companies who have used Facebook courses.
  • Press - Press about the Facebook Blueprint, and how they are spreading the word.
  • Certifications - Facebook specific certifications that you can achieve.
  • Exams - The tests that are available around the Facebook courses.
  • How it Works - Some details about how Facebook training works.
  • Policies - The legal side of things, covering all of their bases.
  • FAQ - Some of the frequently asked questions around the training platform.
  • Support & Help - Where you can get support and more help when it comes to training.

Facebook breaks down their training into course categories and learning paths, providing two main ways for potential students to find what they are looking for. Facebook Blueprint provides…well, a blueprint that other API providers and service providers can consider when pulling together the trainings for their own APIs. Providing online and offline courses is just the start, and as Facebook shows, there are some additional building blocks that need to be present for it to work.

I have some other API platform training areas to go through before I settle in on a strategy for my own API Evangelist training efforts. Ultimately I want my approach to become a blueprint that my customers can put to work. I’m even looking to turn my API training blueprint into a course by itself, so that I can help some of my customers with their training efforts, and even train the trainers in some situations, providing them with course content, and the awareness of the API space they need to be successful delivering courses within their companies, organizations, institutions, and government agencies. In 2018, I’m investing in my storytelling on API Evangelist, as well as industry guides, white papers, and training material for my readers, and privately wherever it is needed.


Considering How Machine Learning APIs Might Violate Privacy and Security

I was reading about how Carbon Black, an endpoint detection and response (EDR) service, was exposing customer data via a 3rd party API service they were using. The endpoint detection and response provider allows customers to optionally scan system and program files using the VirusTotal service. Carbon Black did not realize that premium subscribers of the VirusTotal service get access to the submitted files, allowing any company or government agency with premium access to VirusTotal’s application programming interface (API) to mine those files for sensitive data.

It provides a pretty scary glimpse at the future of privacy and security in a world of 3rd party APIs if we don’t think deeply about the solutions we bake into our applications and services. Each API we bake into our applications should always be scrutinized for privacy and security concerns, making sure end-users aren’t being subjected to unnecessary exposure. This situation sounds like both the API provider and the consumer contributed to the privacy violation, and adjusting platform access levels, and communicating with API consumers, would be the best path forward.

Beyond just this situation, I wanted to write about this topic as a cautionary tale for the unfolding machine learning API landscape. Make sure we are thinking deeply about what data and content we are making available to platforms via artificial intelligence and machine learning APIs. Make sure we are asking the hard questions about the security and privacy of data and content we are running through machine learning APIs. Make sure we are thinking deeply about what data and content sets we are running through the machine learning APIs, and reducing any unnecessary exposure of personal data, content, and media.

It is easy to be captivated by the magic of artificial intelligence and machine learning APIs. It is easy to view APIs as something external, and not much of a privacy or security threat. However, with each API call we are inviting a 3rd party API into our databases, files, and other private systems. Let’s make sure we have an honest conversation with our API providers about how data and content is accessed, stored, cached, and used as part of any AI or ML process. Let’s make sure we get clarification on which partners, or other 3rd party providers are getting access to data and content that is indexed and executed as part of AI and ML API requests and responses. How long are videos or images stored? How long is data stored?
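Here is a sketch of the kind of scrubbing step I am talking about, run before any content leaves for a third party. The endpoint is a hypothetical stand-in for whatever ML API you are calling, and the patterns are deliberately crude; a real implementation would lean on a proper data loss prevention library.

```python
import re
import requests

# Hypothetical third party machine learning API endpoint.
ML_API_URL = "https://ml.example.com/v1/analyze"

# Rough patterns for a few common kinds of personal data.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL REDACTED]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD REDACTED]"),
]

def scrub(text):
    """Strip obvious personal data before content ever leaves the platform."""
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def analyze(text, api_key):
    """Send only the scrubbed content to the machine learning API."""
    payload = {"content": scrub(text)}
    headers = {"Authorization": "Bearer " + api_key}
    return requests.post(ML_API_URL, json=payload, headers=headers)
```

The scrubbing itself is the easy part; the harder questions above, about storage, caching, and partner access on the other side of the API, still have to be asked of the provider.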

I’m seeing more discussion around dependencies going on in the API space. Which software libraries and APIs are we depending on for our applications and services? I’m feeling like this conversation is going to continue expanding, and security, privacy, and observability are going to become a more significant part of these dependency discussions. It will be a conversation that continues to push API deployment on-premise, and, once on-premise, to be observable about how ML and AI API operations are being logged, stored, and tracked. I’m going to keep watching how APIs are intentionally or unintentionally violating security and privacy like this, and keep an eye on the API dependency conversation to see how it evolves as part of this security and privacy discussion.


Thank You Tony

Tony Tam, the creator of the OpenAPI specification, formerly known as Swagger, has announced he will be exiting his role at OAI and SmartBear. Tony says the specification is in good hands with Ron Ratovsky (@webron), Darrel Miller (@darrel_miller), and others in the OAI. Tony doesn’t give any hints about what he’ll be up to, but will be walking away from his baby entirely.

I have given Tony a hard time during the transition from Wordnik to SmartBear, and the creation of the OpenAPI, but I am a huge fan of what he has done, and super bummed to see him go–hoping he won’t leave the API community completely. There are many building blocks that go into doing APIs, and OpenAPI, or Swagger, is the most significant single building block that has emerged in the seven years I’ve been doing API Evangelist. Swagger has had a profound impact on the world of APIs, and OpenAPI will continue doing this in the future, if the right conditions are still present across the API landscape.

Swagger has helped us talk about our APIs. Swagger has helped us collaborate around our APIs. Swagger has opened up a whole lifecycle of API tooling to help us along our journey. I always felt like Swagger reflected Tony’s personality, and with its evolution to OpenAPI, and the OpenAPI Initiative, it has grown beyond its creator. OpenAPI is in good hands. I think it is a good time for Tony to step away, and feel like his baby has begun to grow up, becoming much bigger than what he can do on his own (even with Ron’s amazing help).

Thank you for all your work Tony. You made your mark on the API space. You managed to develop something that was useful for API documentation and code generation, but quickly became about design, testing, monitoring, and every other stop along the API lifecycle. I am stoked to have had the chance to work with you, and spend time telling stories about your important work. I hope you find some time to read some good books, and take time for yourself, and hopefully you don’t go too far from the API space, or at least come back and visit from time to time.


The First Question When Starting An API Is Always: Should We Be Doing This?

I was doing some more work on my list of potential female speakers from the API space. I have some slots to fill for @APIStrat, and I saw another API event was looking for suggestions when it came to speakers. A perfect time to invest some more cycles into finding female API talent. Twitter and Github are always where I go for discovery. I picked up where I left off working on this last time, turned on my search tools that use the Twitter and Github APIs, and got to work enriching the algorithm that drives my API talent search.

Next up on my task list was to deploy a name microservice that would help me filter Twitter and Github users by gender. I’m interested in API folks of all types, but for this round I need to be able to weight results toward female names. I found a list of the top names from the United States which had them broken down by gender. I copied and pasted it into a Google Sheet, fired up a Github repository, and published the spreadsheet of data to Github as YAML–giving me a male.yaml, and female.yaml listing of names. I will be using these names in a variety of web and API applications, but I wanted to be able to help filter any search results by a female name for this project. I understand the limitations of this approach, but it is good enough for what I am looking to accomplish today.
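For those curious, the spreadsheet-to-YAML step is only a few lines. This is a sketch of the general approach, assuming the Google Sheets API and the PyYAML library; the sheet ID, API key, range, and the `_data/` output folder (a Jekyll convention) are all placeholders you would swap for your own.

```python
import requests
import yaml  # PyYAML

SHEET_ID = "YOUR_SHEET_ID"          # placeholder spreadsheet identifier
API_KEY = "YOUR_GOOGLE_API_KEY"     # placeholder Google API key
RANGE = "Sheet1!A2:B"               # column A = name, column B = grouping

url = ("https://sheets.googleapis.com/v4/spreadsheets/"
       f"{SHEET_ID}/values/{RANGE}?key={API_KEY}")
rows = requests.get(url).json().get("values", [])

# Group names by the value in the second column, then write one YAML
# file per group, ready to commit to a Github repository.
groups = {}
for row in rows:
    if len(row) < 2:
        continue
    name, group = row[0], row[1]
    groups.setdefault(group.lower(), []).append(name)

for group, names in groups.items():
    with open(f"_data/{group}.yaml", "w") as f:
        yaml.safe_dump(sorted(names), f, default_flow_style=False)
```

Once the YAML lands in the repository, anything that can read a raw Github URL can put it to work.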

Next, I use my new name microservice as a filter for any Twitter or Github account I’m paying attention to as part of my API monitoring, quickly giving me a list of accounts to look through as I am developing my list of women doing interesting things with APIs. Once I’m done I have a list of Twitter and Github accounts, which I prepare as a Google Sheet, then get ready to publish as YAML within a Github repository as my API women microservice. Then I pause. Should I be doing this? In this current online environment do I want to be building out lists of women, without their consent? Sure their Twitter and Github accounts are public, but should I be singling them out in an easy to access list? IDK.

I am doing this work for my @APIStrat conference, and I want to share the information easily with other conference folks. I also want to showcase women doing interesting things in the API space. However, I don’t want to help automate the targeting and harassment of these women. By publishing a machine readable list of their Twitter and Github accounts I’m doing the heavy lifting for any potential online troll. I’m going to simmer on this one for a week. Luckily I can easily publish the list as a private repository, and share it with anyone who asks using Github, by just adding them as a collaborator on the repository. This situation has reminded me that with each microservice that I publish I should be pausing and always asking myself if I should be doing this in the first place. What are the possibilities for abuse? Am I potentially making someone unsafe with the service I am publishing?


Making Sense Of API Activity With Webhook Events

I was doing some webhooks research as part of my human services work and I found myself studying the types of events used as part of webhook orchestration for Github, Box, Stripe, and Slack. The event type list for each of these platforms tells a lot about what is possible with each API, and the webhooks that get triggered as part of these events show what is important to developers who are integrating with each of these APIs. These event type lists really help make sense of the API activity for each of these APIs, providing a nice list to follow when developing your integration strategy.

What I really like as I look through each of these webhook event lists is that they are usually in pretty plain language, describing events that matter, not just row updates with a timestamp. These events can be very broad, triggering a webhook when anything happens to a resource, or it could be granular and be all about a specific type of change, or anything in between. Each event type represents something API consumers want from an API, and would be normally polling the API for, but since there are webhook events, developers can get a push of data or a ping whenever an event occurs.

Another thing that the presence of webhooks, and a robust list of events represent for me is the maturity of a platform. Github, Box, Stripe, and Slack are all very mature and robust platforms. There are meaningful events defined, and the platform behaves as a two way street, accepting requests, but also making requests whenever a meaningful event occurs. I’m getting to a place where I feel like basic webhook infrastructure should be default for all API providers. The problem is I don’t think there are well enough defined models for API providers to follow when they are planning this part of their API operations. Something we will need to tackle as an industry pretty soon, helping make sure webhooks behave in consistent ways, and have a standard API interface for orchestrating with.
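On the consumer side, the event names are what make webhooks approachable; you subscribe to the events you care about and handle them as they arrive, instead of polling. Here is a minimal sketch of a receiver using Flask, keyed off Github’s X-GitHub-Event header as an example; the handlers are obviously just placeholders.

```python
from flask import Flask, request

app = Flask(__name__)

# Map meaningful platform events to the work we want to do when they
# happen, instead of polling the API for changes.
HANDLERS = {
    "issues": lambda payload: print("issue activity:", payload.get("action")),
    "push": lambda payload: print("new commits pushed"),
    "release": lambda payload: print("release published"),
}

@app.route("/webhooks/github", methods=["POST"])
def github_webhook():
    # Github names the event type in the X-GitHub-Event header; other
    # platforms like Box, Stripe, and Slack put it in the request body.
    event = request.headers.get("X-GitHub-Event", "unknown")
    handler = HANDLERS.get(event)
    if handler:
        handler(request.get_json(silent=True) or {})
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

A consistent model across providers would mean this dispatch logic looks roughly the same no matter whose webhooks you are consuming, which is exactly the standardization I am hoping we tackle.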


API Foraging And Wildcraft

I was in Colorado this last week at a CA internal gathering listening to my friend Erik Wilde talking about APIs. One concept he touched on was what he called API gardening, where different types of API providers approached the planting, cultivating, and maintenance of their gardens in a variety of different ways. I really like this concept, and will be working my way through the slide deck from his talk, and see what else I can learn from his work.

As he was talking I was thinking about a project I had just published to Github, which leverages the Google Sheets API, and the Github API, to publish data as YAML, so I could publish it as a simple set of static APIs. I’d consider this approach to be more about foraging and wildcrafting than it is about tending to my API garden. My API garden just happens to be spread across a variety of services, often free and public space, in many different regions. I will pay for a service when it is needed, but I’d rather tend smaller, wilder APIs that grow in existing spaces–in the cracks.

I use free services, and build upon the APIs for the services I am already using. I publish calendars to Google Calendar, then pull data and publish APIs using the Google APIs. I use the Flickr and Instagram APIs for storing, publishing, sharing, and integrating with my applications. I pull data from the Twitter API and the Reddit API, and store it in Google Sheets. All of this calendar, image, messaging, and link data will ultimately get published to Github as YAML, which then is shared as XML, JSON, Atom, and CSV via static APIs. I am always foraging for data using public services, then planting the seeds, and wildcrafting other APIs–in hopes something will eventually grow.
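The last step of that pipeline is almost embarrassingly simple. As a sketch, assuming PyYAML and a YAML file full of flat records (the file names here are just examples), turning the data into static JSON and CSV paths that Github Pages can serve as read-only APIs looks something like this:

```python
import csv
import json
import yaml  # PyYAML

# A YAML data file like the ones I publish to Github from Google Sheets,
# assumed here to be a list of flat records (dictionaries).
with open("_data/links.yaml") as f:
    records = yaml.safe_load(f) or []

# A static JSON "endpoint" that Github Pages will happily serve as-is.
with open("links.json", "w") as f:
    json.dump(records, f, indent=2)

# A CSV representation alongside it for the spreadsheet folks.
if records:
    fieldnames = sorted(records[0].keys())
    with open("links.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
```

No servers, no database, just files in a repository that behave like an API for the low volume, long tail stuff I am publishing.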

Right now Github is my primary jam, but since I’m just publishing static JSON, XML, YAML, and other media types, it can be hosted anywhere. Since it is Github, it can be integrated anywhere, directly via the static API paths I generate, or by actually cloning the underlying Github repository. I have a Jekyll instance that I can run on an AWS EC2 instance or in a Docker container, and will be experimenting more with Dropbox, Google Drive, and other free or low cost storage solutions that have a public component. I’m looking to have a whole toolbox of deployment options when it comes to publishing my data and content APIs across the landscape. Most of this is low use, long tail data and content I’m using across my storytelling, so I’m not worried about things being heavily manicured, just providing fruit in a sustained way for as low of a price as I possibly can.


API Deployment Comes In Many Shapes And Sizes

Deploying an API is an interesting concept that I’ve noticed folks struggle with a little bit when I bring it up. My research into API deployment was born back in 2011 and 2012, when my readers would ask me which API management provider would help them deploy an API. How you actually deploy an API varies pretty widely from company to company. Some rely on gateways to deploy an API from an existing backend system. Some hand-craft their own API using open source API frameworks like Tyk and deploy it alongside their existing web real estate. Others rely on software as a service solutions like Restlet and Dreamfactory to connect to a data or content source and deploy an API in the clouds.

Many folks I talk with simply see this as developing their APIs. I prefer to break out development into define, design, deploy, and then ultimately manage. In my experience, a properly defined and designed API can be deployed into a variety of environments. The resulting OpenAPI or other definition can be used to generate the server side code necessary to deploy an API, or maybe be used in a gateway solution like AWS API Gateway. For me, API deployment isn’t just about the deployment of code behind a single API, it includes the question of where we are deploying the code, acknowledging that there are many places available when it comes to deploying our API resources.
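
As a quick sketch of what definition-driven deployment can look like, here is a small Python script that reads an OpenAPI file and stands up stub Flask routes for each path. The openapi.yaml filename and the stub behavior are assumptions on my part, and a gateway like AWS API Gateway would consume the same definition through its own import process instead.

```python
# Sketch: stand up stub routes for every path in an OpenAPI definition.
# The openapi.yaml file and stub responses are assumptions for illustration.
import yaml
from flask import Flask, jsonify

app = Flask(__name__)

with open("openapi.yaml") as handle:
    spec = yaml.safe_load(handle)

def make_stub(path, method):
    def stub(**kwargs):
        # Echo back which operation was hit -- a real deployment would wire
        # this to a backend, generated server code, or a gateway.
        return jsonify({"path": path, "method": method, "status": "stubbed"})
    stub.__name__ = f"{method}_{path}".replace("/", "_").replace("{", "").replace("}", "")
    return stub

for path, operations in spec.get("paths", {}).items():
    for method in operations:
        if method.lower() not in ("get", "post", "put", "patch", "delete"):
            continue   # skip keys like parameters or summary
        flask_path = path.replace("{", "<").replace("}", ">")
        app.add_url_rule(flask_path, view_func=make_stub(path, method),
                         methods=[method.upper()])

if __name__ == "__main__":
    app.run(port=8080)
```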

API deployment can be permanent, ephemeral, or maybe just a sandbox. API deployment can be in a Docker container, which by default gives you APIs for controlling the underlying compute for the API you are deploying. Most importantly, API deployment should be a planned, well-honed event. APIs should be able to be redeployed, load-balanced, and taken down without friction. It can be easy to settle on one way of deploying APIs, maybe using practices similar to those surrounding web deployments, or dependent on one gateway or cloud service. Ideally, API deployment comes in many shapes and sizes, and is something that can occur anywhere in the clouds, on-premise, or on-device, pushing our boundaries when it comes to which platforms we use, which programming languages we speak, and what API deployment means.


HTTP Status Codes And The Politics Of APIs

The more I learn about the world of APIs, the more I understand how technology, business, and politics are all woven together into one often immovable monolith. Many things in the world of APIs seem like purely technical things, but in reality they are wrapped up in, and wielded intentionally or unintentionally as part of, larger business agendas, and sometimes personal ones. An example of this can be found in the presence, or lack of presence, of HTTP status codes, where the default status is usually 200 OK, 404 Not Found, or 500 Internal Server Error.

While these seem like very granular technical details of whether or not an HTML, XML, CSV, or JSON document is returned as part of a single web request, their usage often dictates what is happening behind the firewall, and oftentimes more importantly, what is not happening. I find people’s awareness that HTTP status codes exist (or not) a significant sign of their view of the wider web world. If they are aware they exist, they most likely have some experience engaging with other experienced partners using the web. If they don’t, they most likely live a pretty isolated existence–even if they do have a web presence.

Beyond just knowing that HTTP status codes exist, understanding the importance of, and the nuance surrounding, each individual one demonstrates you are used to engaging with external actors at scale, leveraging web technology. I have to put out there that I DO NOT have an intimate knowledge of all HTTP status codes, because I have not exercised them as part of large scale projects, but it is something I do grasp the scope and importance of from the projects I have worked on. This is not a boolean thing, where you either know HTTP status codes or you don’t. This is the result of many journeys, with many partners, across many different types of digital resources. You can tell how many journeys someone has been on based on how they view, and wield, HTTP status codes.
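
For a sense of what wielding status codes precisely looks like from the provider side, here is a small Flask sketch. The /services path, the data store, and the 422 for malformed ids are my own assumptions; the point is that 200, 404, 422, and 500 each send a different, honest signal instead of a blanket 200 OK.

```python
# Sketch of precise status code usage -- the path and lookup are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)
SERVICES = {"1": {"id": "1", "name": "Food Pantry"}}   # stand-in data store

@app.route("/services/<service_id>")
def get_service(service_id):
    if not service_id.isdigit():
        # The request itself is malformed -- the consumer's problem.
        return jsonify({"error": "service id must be numeric"}), 422
    service = SERVICES.get(service_id)
    if service is None:
        # The resource genuinely is not there -- say so, do not return 200.
        return jsonify({"error": "service not found"}), 404
    return jsonify(service), 200

@app.errorhandler(500)
def internal_error(error):
    # Our problem, not theirs -- be upfront about it.
    return jsonify({"error": "something broke on our end"}), 500

if __name__ == "__main__":
    app.run(port=8080)
```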

I encounter folks in my journeys who are dismissive of my focus on HTTP status codes, but I find most of these folks to be purposely vague in the signals they regularly send, and used to keeping things complex, inconsistent, or just never very well defined, so that they can keep things suiting their changing desires and needs. This is something that works well internally, and within a small group of trusted partners, but it is rarely conducive to doing business at scale on the web. Some companies thrive in chaos, because they have the upper hand. They have defined the chaos, benefit from being in the know of how the chaos works, and invest heavily in keeping a gap between themselves and those who are not in the know, forcing external entities to always be on their toes when it comes to understanding what is going on, unable to truly ever know if things are indeed 200 OK, or most likely just a 500 internal server error.

In this situation I always think of my friend Darrel Miller (@darrel_miller), who as an API architect can recite every single HTTP status code, and a usage scenario for each, because of his vast knowledge of APIs and web standards. A side effect of this reality is that if you ask Darrel almost any question, you will get a real honest, direct, and usually very precise answer. Some folks might look at this as Darrel not having filters, or possessing too many opinions, but I’m a big fan of his approach. With Darrel, you are rarely ever unclear about his motivations, feelings, and experience with any topic. I find this world much easier to navigate than the vague, unclear, often purposely obfuscated responses I get from some companies, organizations, institutions, and government agencies I work with. When doing business at scale on the web, it always helps to provide clear signals that have a shared meaning, and to be precise and upfront with what is going on behind the scenes, not hiding behind the black, white, and grey nature of just 200, 404, and 500.


The Patent Application Information Retrieval Bulk Data API

I stumbled across the Patent Application Information Retrieval Bulk Data API from the US Patent Office the other day. It provides a much more usable approach to getting at patent information than what I am using at the moment. Right now I am downloading XML files and searching for the occurrence of a handful of keywords. If I want to make a change I have to fire up a new AWS instance, change the code, and reprocess the downloaded files. The Patent Application Information Retrieval Bulk Data API gives me a much more efficient interface to work with.

The Patent Application Information Retrieval Bulk Data API contains the bibliographic, published document, and patent term extension data tabs in Public PAIR from 1981 to present, with some additional data dating back to 1931. It has leveraged COTS semantics, maintains an open architecture, and the query syntax follows the standard Apache Solr search syntax, with API responses following the Solr formats. This provides a much more powerful interface for querying patent data, one that goes further back in time than what I’ve been doing currently. I’m really interested in doing an API patent search for the 1990s, or maybe even earlier.
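
To show the kind of Solr-style query I have in mind, here is a quick sketch using Python and requests. The endpoint URL, parameter names, and field names are assumptions I’m making from memory, so check the API’s own OpenAPI-driven documentation for the real ones.

```python
# A hedged sketch of a Solr-style query against the PAIR bulk data API.
# The URL, parameters, and field names below are assumptions, not verified.
import requests

SEARCH_URL = "https://ped.uspto.gov/api/queries"   # assumed endpoint

query = {
    # Standard Apache Solr syntax: search invention titles for "API" and
    # restrict the filing date to the 1990s.
    "searchText": "inventionTitle:(API) AND appFilingDate:[1990-01-01T00:00:00Z TO 1999-12-31T23:59:59Z]",
    "start": 0,
    "rows": 20,
}

response = requests.post(SEARCH_URL, json=query)
print(response.status_code)
print(response.json())   # Solr-formatted response with the matching documents
```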

The Patent Application Information Retrieval Bulk Data API is a well designed API, with an attractive API portal and documentation, driven by an OpenAPI definition. The USPTO provides access to the patent data in the way that I think all government agencies should be doing it. You can use the API, or get at a JSON or XML download of the data. The API is “part of the US Patent and Trademark Office’s (USPTO) commitment to fostering a culture of open government as described by the 2013 Executive Order to make open and machine-readable data the new default for government information.” That is pretty cool in my book, and something we desperately need to become a reality across all federal agencies in 2017.


Testing Out The Concept Of API Transit Instead Of API Lifecycle

It isn’t often that I make up acronyms, terms, or phrases. For a number of years I pushed forward the concept of API reciprocity, but eventually conceded to the notion of integration platform as a service (iPaaS). Even with this failure, I’m playing around with the evolution of a new concept around how we map out our entire API operations, an area we commonly call the API lifecycle, but which I’m exploring calling API transit.

When I think about the API lifecycle I am regularly reminded that it is rarely a linear thing from define to deprecation, or even something that goes round and round in a circular format as the name lifecycle implies. This always brings me back to my API subway map work, with the development of my subway map API, and the keynote I gave on the subject at APIStrat in Austin. The subway map concept provides a pretty comprehensive way to map out the unlimited directions in which API operations might go, and I’m looking to see if it can handle the needs of the provider side, as well as those of the API consumer.

While I like the subway map analogy, I really like the more general concept of API transit. The concept of an API subway map seems one dimensional, where API transit seems like it could handle multiple dimensions. I’m not sure there is a fit here, but I wanted to explore the definition of transit, as well as use the phrase in some API storytelling to test it out a little bit, and see if it is even coherent. Ok, so what do my friends over at the Oxford Dictionaries API have to say on the subject?

etymologies:

  • late Middle English (denoting passage from one place to another): from Latin transitus, from transire ‘go across’

definitions:

  • the carrying of people or things from one place to another
  • the conveyance of passengers on public transport.
  • the action of passing through or across a place

I like the idea that API transit could be about the process of carrying people (user data) or things (anything digital) from one place to another – with an API transit map helping visualize this. I like that the map can be used by API providers as a guide for all the stops along the API lifecycle, but instead of being linear or circular, it can be all of the above. Each API transit line could visualize an API lifecycle, and help API providers deliver consistently along each layer of API operations. The same maps and lines can also be used to help API consumers navigate all of the API resources available via any API transit district.

“Denoting passage from one place to another” could be applied to training API providers about what is now called the API lifecycle, but could also be a series of journeys along pre-specified API transit lines, teaching them about concepts around API design, deployment, management, testing, and other aspects of API operations. These same lines can be used to guide each stop along the API lifecycle, helping act as an assembly line for delivering, maintaining, and even deprecating APIs. Once APIs make it to production, the same API transit map can be used to help engage with API consumers, helping move the people and their things from one place to another.

I’m not 100% sold on the concept, and it is something I’m not sure I will keep using. I do want to invest more time into my API subway map, which I’m going to rebrand as the API transit map for the next wave of development. I have the API working, I just need to get the routing to work properly, so that it will lay down each of my API transit lines in a coherent way, allowing them to overlap, and work with and around each other. I have a model in my head for how I can use it to teach folks about key concepts of API operations from definition to deprecation. I also have a model for how it can guide the delivery and management of APIs, acting as a map and framework for managing API operations. I just don’t have the API consumer portion of the equation. I’m guessing it will take me another couple years before this comes into focus for me, but I still enjoy working on the concept, and pushing it forward a little bit each year.


Where Are All The API Focused Agencies?

Earlier this week at the CA API Academy virtual gathering I spoke at in Boulder CO, the question around why there aren’t more API focused agencies came up. We were talking about the need for consulting services around common areas of API operations like design, deployment, management, testing, as well as training around API lifecycle related topics. We are seeing some movement in the area of API focused agencies, but not enough to cover the current demand.

We are seeing full service shops like APIvista, and Good API emerge. There is also movement on the agency level when it comes to integration platform as a service (iPaaS), over at Left Hook Digital, helping companies leverage Zapier, and integrate with API platforms. There is definitely significant movement in the number of API focused agencies, but we are going to need more to meet the demand for API design, deployment, management, testing, and other stops along the API lifecycle.

I’m guessing it will take a couple years for this side of the API business to mature. Then I’m figuring we will begin to see more specialized API agencies emerge, helping with evangelism, support, API design, and maybe even industry focused iPaaS, similar to what we are seeing with LeftHook. As mainstream companies wake up to the potential of APIs, they are going to need a lot of professional help to ensure they are successful in their API journey. We are going to need a number of general service, specialized, regional, and industry specific API agencies to help get us through the next couple of years.

I am focusing on trying to scale the API training portion of this need. I am ramping up development of API training courses that span my API lifecycle and API stack research, helping API providers and consumers navigate the world of APIs. I’m talking with partners about working together to develop and distribute API training, and will be working to train some trainers at SMBs, SMEs, and the enterprise. Eventually I’m guessing we will also need some API focused training agencies to help carry the load in being the last mile of delivering the API curriculum we will be developing.


The 85 Stops Along The API Lifecycle That I Track On

I am preparing a talk for tomorrow, and I needed a new list of each stop along the API lifecycle. Since each of my projects exists as a Github repository, and is defined as a YAML and JSON data store, I can simply define a new Liquid template for generating a new HTML listing of all the stops along the API lifecycle–after generating this list I figured I’d share it here as a story.
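
Jekyll’s Liquid templating handles this listing on my side, but here is a rough Python equivalent of the same idea, assuming a hypothetical _data/lifecycle.yaml file with a name for each stop.

```python
# Rough equivalent of the Liquid listing template -- the YAML file name and
# its structure (a list of entries with a "name" key) are assumptions.
import yaml

with open("_data/lifecycle.yaml") as handle:
    stops = yaml.safe_load(handle)

# Generate a simple HTML listing, one list item per stop along the lifecycle.
items = "\n".join(f"  <li>{stop['name']}</li>" for stop in stops)

with open("lifecycle.html", "w") as handle:
    handle.write(f"<ul>\n{items}\n</ul>\n")
```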

Here are the 85 stops along the API lifecycle landscape from my vantage point as the API Evangelist:

Definitions
Design
Versioning
Hypermedia
DNS
Low Hanging Fruit
Scraping
Database
Deployment
Rogue
Microservices
Algorithms
Search
Machine Learning
Proxy
Virtualization
Containers
Management
Serverless
Portal
Getting Started
Documentation
Frequently Asked Questions
Support
Communications
Road Map
Issues
Change Log
Monitoring
Testing
Performance
Caching
Reliability
Authentication
Encryption
Vulnerabilities
Breaches
Security
Terms of Service (TOS)
Surveillance
Privacy
Cybersecurity
Reclaim
Transparency
Observability
Licensing
Copyright
Accessibility
Branding
Regulation
Patents
Discovery
Client
Command Line Interface
Bots
Internet of Things
Industrial
Network
IDE
SDK
Plugin
Browsers
Embeddable
Visualization
Analysis
Logging
Aggregation
iPaaS
Webhooks
Integrations
Migration
Backups
Real Time
Orchestration
Voice
Spreadsheets
Investment
Monetization
Plans
Partners
Certification
Acquisitions
Evangelism
Showcase
Deprecation

I’m always presenting my API lifecycle research as a listing, or in a linear fashion. I feel like I should be creating an actual lifecycle visualization, but then I end up feeling like I should just invest in my subway API map work, and create a more robust way to represent how the API lifecycle truly looks.

Anyways, these 85 areas represent the scope of my API industry research, and provide a framework for thinking about not just the individual API lifecycle, but also the bigger picture of our API operations and partnerships. Not all of these areas apply to every API provider, but they do provide one perspective of the API landscape that all API providers can learn from. If there are any other stops along the lifecycle you think should be represented, I’d love to hear your thoughts. For example, I’m looking at adding two new areas: 1) training, and 2) fake. These would help me track how API providers are training internally, with partners, and with 3rd party developers, as well as think about the world of API driven fake news, products, advertising, users, and other illnesses emerging across the landscape.


Wildcard Webhook Events

I have been studying the approach of a variety of webhook implementations in preparation for an API consulting project I’m working on. Even though I’m very familiar with how webhooks work, and confident in my ability to design and develop a solution, I’m ALWAYS looking to understand what leading API providers are up to, and how I can improve my knowledge and awareness.

With this round of research, Github has provided me with several webhook nuggets for my API storytelling notebook. One of their webhook features I thought was worth noting was the notion of a wildcard webhook event:

Wildcard Event - We also support a wildcard (*) that will match all supported events. When you add the wildcard event, we’ll replace any existing events you have configured with the wildcard event and send you payloads for all supported events. You’ll also automatically get any new events we might add in the future.

I have been working to identify a set of objects and associated webhook events, and the notion of a wildcard event is interesting. It seems like you could apply this globally, or to specific objects / resources, allowing you to get pushes for any events that occur. I’m not sure I’ll have time to apply this feature in my current project, but it is worth adding to my webhook toolbox for future projects.
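
To think through how this might work, here is a quick sketch of wildcard matching in a webhook dispatcher, using Python’s fnmatch for the pattern matching. The event names and subscription shape are hypothetical, not Github’s actual implementation.

```python
# Sketch of wildcard event matching for a webhook dispatcher -- event names
# and the subscription structure are hypothetical.
from fnmatch import fnmatch

subscriptions = [
    {"target": "https://example.com/hooks/all", "event": "*"},                 # wildcard: every event
    {"target": "https://example.com/hooks/orgs", "event": "organization.*"},   # scoped to one object
    {"target": "https://example.com/hooks/new", "event": "service.created"},   # one specific event
]

def match_targets(event_name):
    """Return every target URL whose subscription matches this event."""
    return [s["target"] for s in subscriptions if fnmatch(event_name, s["event"])]

print(match_targets("service.created"))       # wildcard + the specific subscriber
print(match_targets("organization.updated"))  # wildcard + the organization.* subscriber
```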

There are three other features I’ve extracted from Github’s approach to webhooks that I’ve added to my API storytelling notebook, to hopefully craft into future blog posts. I’ll be adding these all as potential webhook building blocks that API providers can consider. I’m hoping to find some more time and money to invest into my webhook research this fall, and be able to publish a formal guide for the world of webhooks. I’m always surprised by the lack of formal guidance when it comes to webhooks, which is something I’d like to see change in the near future.


The Importance Of API Stories

I am an API storyteller before I am an API architect, designer, or evangelist. My number one job is to tell stories about the API space. I make sure there are (almost) always 3-5 stories a day published to API Evangelist about what I’m seeing as I conduct my research on the sector, and thoughts I’m having while consulting and working on API projects. I’ve been telling stories like this for seven years, which has proven to me how much stories matter in the world of technology, and the worlds that it is impacting–which is pretty much everything right now.

Occasionally I get folks who like to criticize what I do, making sure I know that stories don’t matter. That nobody in the enterprise or startups care about stories. Results are what matter. Ohhhhh reeeaaaly. ;-) I hate to tell you, it is all stories. VC investment in startups is all about the story. The markets all operate on stories. Twitter. Facebook. LinkedIn. Medium. TechCrunch. It is all stories. The stories we tell ourselves. The stories we tell each other. The stories we believe. The stories we refuse to believe. It is all stories. Stories are important to everything.

The mythical story about Jeff Bezos’s mandate that all employees needed to use APIs internally is still 2-3% of my monthly traffic, down from 5-8% over the last couple of years, and it was written in 2012 (five years ago). I’ve seen this story on the home page of the GSA internal portal, and framed hanging on the wall in a bank in Amsterdam. Stories are important. Stories are still important when they aren’t true, or only partially true, like the Amazon mythical tale is(n’t). Stories are how we make sense of all this abstract stuff, and turn it into relatable concepts that we can use within the constructs of our own worlds. Stories are how the cloud became a thing. Stories are why microservices and DevOps are becoming a thing. Stories are how GraphQL wants to be a thing.

For me, most importantly, telling stories is how I make sense of the world. If I can’t communicate something to you here on API Evangelist, it isn’t filed away in my mental filing cabinet. Telling stories is how I have made sense of the API space. If I can’t articulate a coherent story around API related technology, and it just doesn’t make sense to me, it probably won’t stick around in my storytelling, research, and consulting strategy. Stories are everything to me. If they aren’t to you, it’s probably because you are more on the receiving end of stories, and not really influencing those around you in your community, and workplace. Stories are important. Whether you want to admit it or not.


API Kindergarten For Business And IT Leaders

I’m working on a number of API courses and lessons lately. Some of these are API 101 courses, while others are more advanced courses for the seasoned API provider, and consumer. As I think about what is needed when it comes to classes and workshops across the API sector, I’m considering doing an API Kindergarten series, where business and IT leaders can learn the basics of doing business with APIs.

The curriculum for the API kindergarten program includes hands on lessons on how to play nicely, get along with others, the importance of sharing, and helping them learn important soft skills like not shitting your pants. I’m always surprised at the lack of basic skills among company, organizational, institutional, and government leadership when it comes to the essentials of why APIs work, and think a little primer on things might help some realize they shouldn’t be doing APIs in the first place, or maybe prevent some major crisis down the road.

The number of folks who tell me directly that they are all in on this API thing, yet when it comes to practice clearly are not, has grown significantly in 2017. I feel like a whole series or curriculum to help make sure business and IT leadership is up for the challenge is desperately needed. Just wait until I begin working on my sex ed course for the middle schoolers, where we teach them about safely integrating, and what is appropriate API behavior, and what is not. Honestly, I could spend days finding equivalences between the real world and doing APIs (not the real world) and creating classes around them–fun stuff!


Which Platforms Have Control Over The Conversation Around Their Bots

I spend a lot of time monitoring API platforms, thinking about different ways of identifying which ones are taking control of the conversation around how their platforms operate. One example of this out in the wild can be found when it comes to bots, by doing a quick look at which of the major bot platforms own the conversation around this automation going on via their platforms.

First you take a look at Twitter, by doing a quick Google search for Twitter Bots:

Then you take a look at Facebook, by doing a quick Google search for Facebook Bots:

Finally take a look at Slack, by doing a quick Google search for Slack Bots:

It is pretty clear who owns the conversation when it comes to bots on their platform. While Twitter and Facebook both have information and guidance about doing bots, they do not own the conversation like Slack does, something that is reflected in the search engine placement. It is also something that sets the tone of the conversation going on within the community, and defines the types of bots that will emerge on the platform.

As I’ve said before, if you have a web or mobile property online today, you need to be owning the conversation around your API or someone eventually will own it for you. The same comes to automation around your platform, and the introduction of bots, and automated users, traffic, advertising, and other aspects of doing business online today. Honestly, I wouldn’t want to be in the business of running a platform these days. It is why I work so hard to dominate and own my own presence, just so that I can beat back what is said about me, and own the conversation on at least Google, Twitter, LinkedIn, Facebook, and Github.

It seems to me that if you are going to enable automation on your platform via APIs, it should be something that you own completely, making sure you provide some pretty strong guidance and direction.


The ElasticSearch Security APIs

I was looking at the set of security APIs over at Elasticsearch as I was diving into my API security research recently. I thought the areas where they provide security APIs for the search platform were worth noting and including not just in my API security research, but also in my search and deployment research, with probable overlap into my authentication research–a quick sketch of the authenticate call follows the list below.

  • Authenticate API - The Authenticate API enables you to submit a request with a basic auth header to authenticate a user and retrieve information about the authenticated user.
  • Clear Cache API - The Clear Cache API evicts users from the user cache. You can completely clear the cache or evict specific users.
  • User Management APIs - The user API enables you to create, read, update, and delete users from the native realm. These users are commonly referred to as native users.
  • Role Management APIs - The Roles API enables you to add, remove, and retrieve roles in the native realm. To use this API, you must have at least the manage_security cluster privilege.
  • Role Mapping APIs - The Role Mapping API enables you to add, remove, and retrieve role-mappings. To use this API, you must have at least the manage_security cluster privilege.
  • Privilege APIs - The has_privileges API allows you to determine whether the logged in user has a specified list of privileges.
  • Token Management APIs - The token API enables you to create and invalidate bearer tokens for access without requiring basic authentication. The get token API takes the same parameters as a typical OAuth 2.0 token API except for the use of a JSON request body.
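
To make the first of these concrete, here is a small sketch of calling the authenticate API with basic auth using Python and requests. Depending on your Elasticsearch version, the path is /_xpack/security/_authenticate or /_security/_authenticate, and the host, user, and password here are obviously placeholders.

```python
# Sketch of the Elasticsearch authenticate call -- host and credentials are
# placeholders, and the path varies by Elasticsearch version.
import requests

ES_HOST = "https://localhost:9200"

response = requests.get(
    f"{ES_HOST}/_xpack/security/_authenticate",   # /_security/_authenticate on newer versions
    auth=("elastic", "changeme"),                  # the basic auth header described above
    verify=False,                                  # only acceptable for a local sandbox
)

# Returns information about the authenticated user: username, roles, metadata.
print(response.status_code)
print(response.json())
```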

Come to think of it, I’ll add this to my API management research as well, since much of this overlaps with what should be a common set of API management services. Like much of my research, there are many different dimensions to my API security research. I’m looking to see how API providers are securing their APIs, as well as how service providers are selling security services to API providers. I’m also keen on aggregating common API design patterns for security APIs, and quantifying how they overlap with other stops along the API lifecycle.

While the cache API is pretty closely aligned with delivering a search API, I think all of these APIs provide potential building blocks to think about when you are deploying any API, and represent the Venn diagram that is API authentication, management, and security. I’m going through the rest of the Elasticsearch platform looking for interesting approaches to ensuring their search solutions are secure. I don’t feel like there are any search specific characteristics of API security that I will need to include in my final API security industry guide, but Elasticsearch’s approach has reinforced some of the existing security building blocks I already had on my list.


Where To Begin With Webhooks For The Human Services Data API

I am getting to work on a base set of webhook specifications for my human services data API work, and I wanted to take a fresh drive through a handful of the leading APIs I’m tracking on. I’m needing to make some recommendations regarding how human services data APIs should be pushing information via APIs, as well as providing APIs. Webhooks are fascinating to me because they really are just APIs in reverse. A webhook is just an API request where the target URL is a variable, allowing an API call to be made from a platform to any target URL, either when a triggering event occurs, or on a schedule as a job.

Here are six of the API providers I took a look at while doing this webhook research:

All of these API providers offer webhooks, allowing developers to create an API call that will be fired off when a specific event occurs. These events are usually tied to a specific object. For Box it is a document. For Github it is a repository. For Stripe it is a payment. With human services it will be an organization, location, or service. There are a handful of key concepts at play when it comes to webhooks, making them an important part of the equation:

  • Object - The object in which an event is occurring. For this project it is organizations, locations, services, contacts, and potentially other elements of API operations.
  • Events - This is a list of events that can occur against all the objects that will trigger the execution of a webhook.
  • Target - The URL of the webhook. This is the variable of the outgoing API call, allowing it to be defined by API consumers, and executed by the API provider.
  • Fat - The webhook will carry a payload, submitting a predefined schema, usually of the associated object to the target.
  • Ping - The webhook does not carry a payload, simply pinging the target of a webhook, letting API consumers know an event has occurred.
  • Status - The ability to identify the status of a webhook.
  • Errors - The errors that should be returned as part of webhook execution, and shared as its status.
  • Retries - Allowing for webhook execution to be replayed, executing a specific event that occurred in the past.
  • Signatures - Allowing for the signature of each webhook request to be verified for security and integrity purposes (see the sketch after this list).
  • Test - Enabling API consumers to test a webhook and see if it will work, sending over real or sample data.
  • History - Providing a complete history of webhook execution for API consumers to search, browse, and review.
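
Here is a quick sketch of that signature building block, following the common HMAC pattern Github uses with its X-Hub-Signature header: the provider signs the payload with a shared secret, and the consumer recomputes and compares. The secret and payload are placeholders for this example.

```python
# Sketch of webhook signature verification, using the HMAC pattern Github
# follows with X-Hub-Signature -- the shared secret and payload are placeholders.
import hashlib
import hmac

SHARED_SECRET = b"webhook-secret"   # agreed upon between provider and consumer

def sign(payload: bytes) -> str:
    """Provider side: compute the signature sent along with the webhook."""
    return "sha1=" + hmac.new(SHARED_SECRET, payload, hashlib.sha1).hexdigest()

def verify(payload: bytes, signature_header: str) -> bool:
    """Consumer side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(payload), signature_header)

body = b'{"event": "service.updated", "id": 123}'
header = sign(body)                            # what the provider would send
print(verify(body, header))                    # True -- payload is intact and authentic
print(verify(b'{"tampered": true}', header))   # False -- reject and log the delivery
```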

The external focus, and their event based nature, are the most notable characteristics of a webhook, but it is the transactional nature of their supporting systems that seems to make them such a utility for alleviating the load on APIs. There are a number of other characteristics of webhooks, but this gets at the core of what they do, and provides me with what I need to move my human services API conversations forward. I’m looking to have a handful of examples of webhook implementations for well-known API platforms to share with folks, and begin setting the stage for an initial API design and definition for a human services webhook implementation, based upon common practices already in use.

I find webhooks interesting because they are not a standard, and there really isn’t much in the way of best practices, just some common examples of how they are used by existing API providers. Webhooks are just APIs. It is just the target URL that is variable, and the objects and events that can be unique to each API platform. In human services implementations I don’t just see webhooks as being about making API calls to other applications. I see webhooks as something that can satisfy the pushing of data between many different API implementations, beginning to set the stage for interoperability between hundreds or even thousands of human service providers, pushing and pulling data as needed, and as prescribed by each API provider. Allowing human service providers to speak a common language and seamlessly share information across a variety of partnerships gets us closer to the vision of Greg Bloom, the creator of Open Referral and the person behind the Human Services Data Specification (HSDS), for how all of this should be working.

**Photo Credit:** REST and webhooks are two sides of the same coin, from “Web Hooks and the Programmable World of Tomorrow”, Jeff Lindsay, October 2008


Addressing Bulk API Operations As Separate Set Of Services

Part of the feedback I’ve received on the Human Services Data API (HSDA) evolution from v1.0 to v1.1 was that the API didn’t allow for volume or bulk GET, POST, PUT, or DELETE. This was intentional in the incremental release, which focused on just making sure the API reflected 100% of the surface area of the Human Services Data Specification (HSDS). I wanted to separate out the needs of bulk API consumers, so that I could think about them separately from the simpler, micro-use integrations the default Human Services Data API would accommodate. I don’t want the industrial grade needs of database and system administrators overriding the simple access needs of individual API consumers.

To kick off my human services bulk API definition I wanted to spend some time looking at other bulk implementations from a handful of leading providers:

As I went through these implementations, and searched through Stack Overflow about bulk HTTP POSTs, I really didn’t see much difference on the technical front. Bulk APIs are primarily about acknowledging heavy duty data consumption, and primarily use HTTP POST, allowing the submission of either a) a large individual POST, or b) a large number of POSTs. Technically there isn’t much to them; where you start finding the nuance of bulk APIs over regular APIs is in the process surrounding the API implementation, things like having tasks, jobs, history, notifications, webhooks, email, and logging. Meaning there is just a lot more process and expectation around these APIs, which also most likely translates into more robust background architecture, and rules regarding the process involved with access.

Bulk APIs seem to be more about a separation of concerns. Bulk GETs and POSTs require more infrastructure, and to do them properly you need process, and checks and balances, to make sure a bulk operation is successfully executed, and that there is sufficient history, notifications, and other events around bulk transactions. I’d say that bulk API operations should always be approached as a separate concern from the more mainstream web, mobile, single page application, embeddable, and spreadsheet applications. Providing a separate API subdomain, or separate paths, allows for the separation of infrastructure concerns, making sure the API meets the availability needs of both API provider and consumer. Next, I’ll spend some more time mapping out the task or job oriented structure of bulk APIs, and of other API operations, to help make sure I do not reinvent the wheel when it comes to the design of my bulk API operations.
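
As a quick sketch of what that separation of concerns could look like, here is a bulk POST that accepts an array of records, immediately returns a job id, and exposes a jobs path for checking status. The paths, fields, and in-memory job store are all placeholders for a real queue-backed system.

```python
# Sketch of a bulk API as a separate concern: accept a batch, return a job id.
# Paths, fields, and the in-memory job store are placeholders.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
JOBS = {}   # stand-in for a real task queue and history store

@app.route("/bulk/organizations", methods=["POST"])
def bulk_organizations():
    records = request.get_json(silent=True) or []
    job_id = str(uuid.uuid4())
    # A real implementation hands the records to a background worker, then
    # sends a webhook or email notification when the job completes.
    JOBS[job_id] = {"status": "queued", "received": len(records)}
    return jsonify({"job": job_id}), 202   # accepted, not yet processed

@app.route("/bulk/jobs/<job_id>")
def job_status(job_id):
    job = JOBS.get(job_id)
    if job is None:
        return jsonify({"error": "job not found"}), 404
    return jsonify(job), 200

if __name__ == "__main__":
    app.run(port=8080)
```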


Some Microservice Thoughts Around My Human Services API Work

The Human Services Data API I have been working on is about defining a set of API paths for working with the organizations, locations, and services that are delivering human services in cities around the world. As I’m working to evolve the OpenAPI for the Human Services Data API (HSDA), I’m constantly mindful of bloat and unnecessary expansion of features, and always working to separate things by concern. My thoughts on this have evolved due to a hackathon I attended this week in San Francisco, where a team at Optimizely worked to decouple an existing human services application from its backend and help teach it to speak Human Services Data Specification (HSDS)–allowing it to speak a common language around the services that we humans depend on daily.

As the hackathon team was decoupling the single page application (React) from the API backend (Firebase), I took the API calls behind it and published the responses to Github as two JSON files. One of the files was locations, which contained 217 human service locations in San Francisco, and the other was metadata, which contained a handful of categories being used to organize and display the locations. In this situation there is no notion of an organization, just 217 locations offering human services across five categories. This legacy application, and forward engineering hackathon project, was quickly becoming a microservices project–ironically, a microservices project that is about delivering human services. ;-)

Looking at this unfolding project through a microservices lens, I needed to provide a single service. In the context of Link-SF, the original project, I needed to offer a service that would deliver 217 locations where people can find human services in the areas of food, housing, hygiene, medical, and technology via a web or mobile application. To help me achieve my goals I began to step through each of the steps in the lifecycle of any self-contained microservice:

  • Github - Each of my services begins with a Github repository, so I created a new repo.
  • Definitions - I defined the service I wanted as an OpenAPI.
  • Design - I worked to keep the design of it as simple as I possibly can.
  • DNS - I relied on Github’s DNS for this service, but may setup my own subdomain.
  • Database - I published a YAML data file into the data folder for the Github repository, acting as the database for my service, leveraging Github and Jekyll to help me broker database connectivity.
  • Deployment - I deployed a simple static JSON API driven from the locations YAML store (see the sketch after this list).
  • Management - I am using Github as the management platform for my API, helping me deploy and manage consumption of my service. I’m using the Github API as the programmatic layer for managing my service operations to help me continuously deploy and integrate with this service.
  • Portal - I am using Github pages as the portal for my service, providing both human and programmatic access to my service, making human service locations more available.
  • Documentation - I published static HTML, and Swagger UI documentation for the service.
  • Support - I have published a support page, and will be providing support via Github issues, Twitter, and email.
  • Communications - I have a blog published, and will be publishing the story of the service using the Jekyll CMS blogging solution.
  • Caching - The API, portal, and all aspects of the service I’ve launched is being cached by Github, riding on the backs of their Content Delivery Network (CDN).
  • Reliability - While Github has been known to have stability issues, it is still one of the most reliable ways to host data and API driven services.
  • Encryption - I’m leveraging the Github DNS, and using their encryption in transit by default. I will be using my own subdomain and encryption in the future.
  • Security - I’m offloading platform security to Github. They have more resources than I do. I’m also looking to keep things more secure by keeping my services as static as possible.
  • Testing - I am setting up a series of monitors to ensure the service stays available, and delivering expected data and promised schema. I will be publishing a status update and history page.
  • Transparency - Everything involved with my service runs on Github, either as a public or private repository, with the entire lifecycle transparent publicly, or privately to whoever is given access to the repository using Github.
  • Observability - The entire service runs on Github. The entire lifecycle is observable in the Github history, and leverage their existing infrastructure for identity and access management (IAM), and continuous integration and deployment.
  • Discovery - There is an APIs.json available in the root of the project, providing a machine readable index of the schema, data, API, and other resources available via this service.
  • Client - I’ve published an HTML / JavaScript client on the home page of the project, with an editor for managing the data, which leverages the Github API for reading and writing data to the services repository.
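
For the deployment stop in the list above, here is a sketch of the kind of script that turns the locations YAML store into the static JSON the Github Pages portal serves as the API. The file names and the id field are assumptions based on the project as described, not the exact repository layout.

```python
# Sketch: generate static JSON API files from the locations YAML data store.
# File names and the "id" field are assumptions about the repository layout.
import json
import os

import yaml

with open("_data/locations.yaml") as handle:
    locations = yaml.safe_load(handle)

# The full collection, served by Github Pages as a static path like /locations.json
with open("locations.json", "w") as handle:
    json.dump(locations, handle, indent=2)

# One static file per record, giving each location its own API path.
os.makedirs("locations", exist_ok=True)
for location in locations:
    with open(f"locations/{location['id']}.json", "w") as handle:
        json.dump(location, handle, indent=2)
```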

I’m calling my new service, born out of the Link-SF legacy application, Human Services Link. However, the only thing remaining of the Link-SF application is the data. I’ve forward engineered the schema, API, and web interface to all run on Github as a single, yet decoupled, service. It is a reduction of the overall Human Services Data Specification (HSDS) schema, focusing in on just providing services at a handful of locations, reducing complexity and scope to deliver a specific service–do one thing and do it well. I will be launching an Amazon EC2, and containerized, version of this human services microservice, but I wanted to look at this one incarnation of it through the microservices lens. Sure, it’s not your classic microservices definition, but I think it holds up to a microservices test; it just makes some different architectural choices than the rest of the community might have made.

My human services microservice definition solves a single problem, for a specific type of end-user. It was originally geographically limited to San Francisco, but with this evolution I’ve made sure there is a state/province field, as well as country, so the solution can be deployed for any city. I want the compute, database, DNS, API, portal, docs, UI elements, and admin tools to all be self contained for the delivery of a single microservice, but I wanted it all forkable, so you could launch this service over and over, in any city around the globe. There is still a lot of work to be done on this project before it is ready for prime time, but I’ve planted the seeds when it comes to delivering microservices in my world this way. I’m hoping it will be something I can’t ignore, and will keep pushing forward as part of my overall vision of how we can deliver sustainable API driven services.

I really like the static nature of this approach. I really like the forkability of this approach, and how you can use Github to separate out concerns by organization. I really like how I can offload much of the operation of my backend to Github, including security, CDN, and scalability / reliability. It is definitely not a pattern everyone should be following when doing microservices, but it does provide a static, forkable pattern that can really help keep services focused and self contained, yet available to work in service of a bigger architectural picture. I’ll keep playing around with this approach to delivering human service microservices, and see what I can push forward and make stick. I’m hoping it can provide a low cost way for cash strapped municipalities to better deliver up to date information about human services to their entire population.


Decoupling The Business Of My Human Services Microservice

I’ve been looking at my human services API work through a microservices lens, triggered by the deployment of a reduced functionality version of the human services implementation I was working on this week. I’m thinking a lot about the technical side of decoupling services using APIs, but I want to also take a moment and think about the business side of decomposing services, while also making sure they are deployed in a way that meets both the provider and consumer side of the business equation. My human services microservice implementation is in the public service, which is a space where the business conversation often seems to disappear behind closed doors, but in reality needs to be front and center with each investment (commit) made into any service.

Let’s take a moment to look at the monetization strategy and operational plan for my human service microservice. Yes, a public data microservice should have a monetization strategy and plan for operating and remaining sustainable. The goals for this type of microservice will be radically different than it would be for a commercial microservice, but it should have one all the same.

  • Monetization - How am I evaluating the investment into this project alongside any value that is generated, which I can potentially capture or exchange some value for some money to keep going.
    • Acquisition - What did it take for me to acquire the data and skills necessary to make the deployment possible.
    • Development - What time was invested in setting up the platform, developing the schema, data, definitions, code, and visual elements.
    • Operations - What does it take to operate the service? Maintain it, answer questions, provide support, and other realities of providing an online service today.
    • Direct Value - What are the direct benefits of having this service available to people looking for human services, or organizations looking to provide human services.
    • Indirect Value - What are the indirect benefits of having this service available, like increased conversation around human services, or maybe traffic and awareness of the Open Referral organization.
    • Partners - What partnership opportunities are actively being sought out, or will be opened up by having this service available.
    • Reporting - What type of reporting is necessary to operate and monetize this service, from tracking page views to understanding who is integrated with the data, and consuming data via APIs, or possibly through continuous integration of the Github repository.
  • Plan - What is my plan for making this service available to the public, partners, or maybe internally use across my own projects.
    • Elements - My human services location API is designed to be publicly available, forkable, and reusable by anyone.
    • Limits - There are no limits on how each human services microservice can be used, or forked and reused. Ideally, any implementation provides attribution, and acknowledges the source of framework or data, but there really are no rules.
    • Metrics - I am measuring unique page views on each page, including of the machine readable YAML, JSON, and other formats. I’m also tracking on stars, forks, and commits for each service.
    • Commercial - Providing a clear track for commercial vendors to understand that the project needs their support, and can be improved upon, and evolved through their underwriting and support.

I need to have a coherent snapshot of what I’ve invested into each of my services. I need to have a base plan for how I will be executing the business side of each service–even if it is just making something available for free. There are two dimensions to this conversation: 1) my view as the creator of this service, and 2) the view of the folks who fork and implement it as a service. Both dimensions should have a monetization snapshot, and a plan for executing within this business snapshot. I need all of this decoupled from any other service I am offering, but ideally they all use a common set of reusable patterns, just like the technical aspects of my microservices effort.

Just like needing the compute, database, DNS, and other technical layers to be stable and scalable across my microservices, I need the costs associated with these elements predictable and affordable. I need to know what it costs to define, design, deploy, manage, and deprecate my services. I need to know the best path forward for making them public, keeping them private, and being honest with commercial partners about the value that is being generated, both directly and indirectly. I need a way to report my costs, and revenue across hundreds or thousands of these services. I need to be able to scale this both horizontally across many services, but also vertically for single services which get deployed over and over, reused and continuously integrate wherever they are needed using Github.

I’ll keep applying this model across my human services project. I’m thinking that I’m going to be developing a whole buffet of human service microservices that run 100% on Github like this, but I am also playing around with other varietals that run on other popular cloud platforms as well. I’m not one to subscribe to any particular API dogma, I’m just looking to explore what is possible, and do all of it as cheaply as I possibly can, growing the number of projects I’m able to tackle. Of course, none of this would be possible without my partners funding my work, and helping me connect the technical and business aspects of doing API Evangelist.


API Platform FAQ And QA Responsibility

The discussion around whether or not you should be hosting your own questions and answers (QA) and frequently asked questions (FAQ) for your API has continued, with many of the leading API pioneers asserting responsibility over the operations of these important API resources. Amazon noticed that answers about their platform on Quora and Stack Exchange were usually out of date and often just plain wrong, prompting them to launch their own QA solution.

I have written about API providers using Stack Overflow for many years now. In the last few years I’ve had my readers push back on this for a variety of reasons, from the Stack Overflow community being primarily a white male bro-fest, to finding answers unreliable, out of date, and the site often a pretty hostile and unfriendly place for people to try and learn about APIs. I’d say that I still use Stack Overflow for about 40% of my querying of API and programming related subjects, but since I’m a white male who has been doing software for 30 years, I’m a little more resistant to the bro-fest. But I get it, and hear what folks are saying, and get that it is not always a suitable environment.

Going back and forth on this subject, I’m back in the camp where API providers should be investing in operating their own QA, FAQ, and support forums. It definitely requires a significant amount of investment, policing, and sometimes taking stances that are unpopular, but if you are in this for the long game, it will be worth it. After watching AWS for a decade, you can see how incorrect information about your API operations can really begin to become a liability, and you might want to keep a tighter grip on where your API consumers go to look for their answers. An added bonus is that you also get to set the tone for the types of questions that get answered, and the inclusiveness that will exist across your FAQ, QA, and support.

I really need to get my core API guides covering definitions, design, deployment, and management out the door, because I need to dive into areas like support, and gather all my thoughts regarding how API providers should be approaching this critical layer of operations. I feel like support is one of the most defining characteristics of sustainable API providers, right up there with communications I’d say. I don’t care how good your API is technically, if you don’t have a solid approach to supporting and communicating with your API community, I’m guessing you won’t be around very long, or if you do survive, your platform will be something savvy API consumers steer clear of.


Investing The Time To Learn API Best Practices So You Do Not Reinvent The Wheel

I was on a call the other day with a group of people who are in the trenches of organizations and companies working hard to deliver human services in cities around the country. We were meeting to kick off the design phase of a new type of API, and after they shared all their thoughts via project documentation, they were asking me to help identify examples of best practices from the space. The group felt they didn’t have the time, or the awareness of what is going on, to be able to identify the best practices that already exist across the space.

This is one of the reasons I stay out of the weeds of individual projects. I may help define, design, and even shadow the deployment and management, but I work hard to avoid the tractor beam of ongoing projects so that I can pay attention to the bigger picture and help share stories about what I’m seeing. I feel like there should be people like me in each industry, helping shine a light on, and aggregate, best practices when it comes to the API life cycle. There is just too much work to be done, and it helps to have folks who have domain expertise, not just the light understanding of what is going on across many sectors that I have.

I think it is fine for the sector to depend on API analysts like me, but I think that groups should also be investing the time to pick their heads up and pay attention to what else is going on when it comes to APIs in their industry. I understand that many groups are busy keeping systems operational, and dealing with real world problems, but dedicated reading of blogs and white papers, and tuning into the social channels of other API providers, is important as well. Each decision made on API design, deployment, and management should be established from time spent reading, and learning about other existing approaches–minimizing the amount of wheel reinvention that occurs.

The amount of time you invest in learning about other successful API providers will pay off in the amount of time it saves you when defining, designing, deploying, and managing an API. Sadly, it is one of those areas where some companies aren’t equipped to measure the return on investment, resulting in many companies limiting the amount of time API developers spend reading blogs, and actively researching existing implementations, leading design patterns, and healthy API practices. If your boss is ever giving you grief about the amount of time you are spending on learning about other APIs, make sure and point them in my direction. I’m happy to help them understand why you don’t want to be reinventing the wheel when it comes to web APIs.


Link Relation Types for APIs

I have been reading through a number of specifications lately, trying to get more up to speed on what standards are available for me to choose from when designing APIs. Next up on my list is Link Relation Types for Web Services, by Erik Wilde. I wanted to take this informational specification and repost it here on my site, partially because I find it easier to read, and because the process of breaking things down and publishing them as posts helps me digest the specification and absorb more of what it contains.

I’m particularly interested in this one, because Erik captures what I’ve had in my head for APIs.json property types, but haven’t always been able to articulate as well as Erik does, let alone publish as an official specification. I think his argument captures the challenge we face with mapping out the structure we have, and how we can balance the web with the API, making sure as much of it becomes machine readable as possible. I’ve grabbed the meat of Link Relation Types for Web Services and pasted it here, so I can break it down, and reference it across my storytelling.


  1. Introduction
    One of the defining aspects of the Web is that it is possible to interact with Web resources without any prior knowledge of the specifics of the resource. Following Web Architecture by using URIs, HTTP, and media types, the Web’s uniform interface allows interactions with resources without the more complex binding procedures of other approaches.

Many resources on the Web are provided as part of a set of resources that are referred to as a “Web Service” or a “Web API”. In many cases, these services or APIs are defined and managed as a whole, and it may be desirable for clients to be able to discover this service information.

Service information can be broadly separated into two categories: One category is primarily targeted for human users and often uses generic representations for human readable documents, such as HTML or PDF. The other category is structured information that follows some more formalized description model, and is primarily intended for consumption by machines, for example for tools and code libraries.

In the context of this memo, the human-oriented variant is referred to as “documentation”, and the machine-oriented variant is referred to as “description”.

These two categories are not necessarily mutually exclusive, as there are representations that have been proposed that are intended for both human consumption, and for interpretation by machine clients. In addition, a typical pattern for service documentation/description is that there is human-oriented high-level documentation that is intended to put a service in context and explain the general model, which is complemented by a machine-level description that is intended as a detailed technical description of the service. These two resources could be interlinked, but since they are intended for different audiences, it can make sense to provide entry points for both of them.

This memo places no constraints on the specific representations used for either of those two categories. It simply allows providers of a Web service to make the documentation and/or the description of their services discoverable, and defines two link relations that serve that purpose.

In addition, this memo defines a link relation that allows providers of a Web service to link to a resource that represents status information about the service. This information often represents operational information that allows service consumers to retrieve information about “service health” and related issues.

  2. Terminology
    The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 [RFC2119].

  3. Web Services
    “Web Services” or “Web APIs” (sometimes also referred to as “HTTP API” or “REST API”) are a way to expose information and services on the Web. Following the principles of Web architecture, they expose URI-identified resources, which are then accessed and transferred using a specific representation. Many services use representations that contain links, and often these links are typed links.

Using typed links, resources can identify relationship types to other resources. RFC 5988 [RFC5988] establishes a framework of registered link relation types, which are identified by simple strings and registered in an IANA registry. Any resource that supports typed links according to RFC 5988 can then use these identifiers to represent resource relationships on the Web without having to re-invent registered relation types.

In recent years, Web services as well as their documentation and description languages have gained popularity, due to the general popularity of the Web as a platform for providing information and services. However, the design of documentation and description languages varies with a number of factors, such as the general application domain, the preferred application data model, and the preferred approach for exposing services.

This specification allows service providers to use a unified way to link to service documentation and/or description. This link should not make any assumptions about the provided type of documentation and/or description, so that service providers can choose the ones that best fit their services and needs.

3.1. Documenting Web Services
In the context of this specification, “documentation” refers to information that is primarily intended for human consumption. Typical representations for this kind of documentation are HTML and PDF.

Documentation is often structured, but the exact kind of structure depends on the structure of the service that is documented, as well as on the specific way in which the documentation authors choose to document it.

3.2. Describing Web Services
In the context of this specification, “description” refers to information that is primarily intended for machine consumption. Typical representations for this are dictated by the technology underlying the service itself, which means that in today’s technology landscape, description formats exist that are based on XML, JSON, RDF, and a variety of other structured data models. Also, in each of those technologies, there may be a variety of languages that are defined to achieve the same general purpose of describing a Web service.

Descriptions are always structured, but the structuring principles depend on the nature of the described service. For example, one of the earlier service description approaches, the Web Services Description Language (WSDL), uses “operations” as its core concept, which are essentially identical to function calls, because the underlying model is based on that of the Remote Procedure Call (RPC) model. Other description languages for non-RPC approaches to services will use different structuring approaches.

3.3. Unified Documentation/Description
If service providers use an approach where there is no distinction of service documentation (Section 3.1) and service description (Section 3.2), then they may not feel the need to use two separate links. In such a case, an alternative approach is to use the “service” link relation type, which has no indication of whether it links to documentation or description, and thus may be a better fit if no such differentiation is required.

  4. Link Relations for Web Services
    In order to allow Web services to represent the relation of individual resources to service documentation or description, this specification introduces and registers two new link relation types.

4.1. The service-doc Link Relation Type
The “service-doc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are documented at a specific URI. The target resource is expected to provide documentation that is primarily intended for human consumption.

4.2. The service-desc Link Relation Type
The “service-desc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are described at a specific URI. The target resource is expected to provide a service description that is primarily intended for machine consumption. In many cases, it is provided in a representation that is consumed by tools, code libraries, or similar components.

  5. Web Service Status Resources
    Web services providing access to a set of resources often are hosted and operated in an environment for which status information may be available. This information may be as simple as confirming that a service is operational, or may provide additional information about different aspects of a service, and/or a history of status information, possibly listing incidents and their resolution.

The “status” link relation type can be used to link to such a status resource, allowing service consumers to retrieve status information about a Web service’s status. Such a link may not be available from all resources provided by a Web service, but from key resources such as a Web service’s home resource.

This memo does not restrict the representation of a status resource in any way. It may be primarily focused on human or machine consumption, or a combination of both. It may be a simple “traffic light” indicator for service health, or a more sophisticated representation conveying more detailed information such as service subsystems and/or a status history.

  6. IANA Considerations
    The link relation types below have been registered by IANA per Section 6.2.1 of RFC 5988 [RFC5988]:

6.1. Link Relation Type: service-doc

Relation Name: service-doc
Description: Linking to service documentation that is primarily intended for human consumption.
Reference: [[ This document ]]

6.2. Link Relation Type: service-desc

Relation Name: service-desc
Description: Linking to service description that is primarily intended for consumption by machines.
Reference: [[ This document ]]

6.3. Link Relation Type: status

Relation Name: status
Description: Linking to a resource that represents the status of a Web service or API.
Reference: [[ This document ]]


Adding Some Of My Own Thoughts Beyond The Specification
This specification provides a more coherent service-doc and service-desc than I think we did with humanURL, and support for multiple API definition formats (swagger, api blueprint, raml) as properties for any API. This specification provides a clear solution for human consumption, as well as one intended for consumption by machines. Another interesting link relation it provides is status, helping articulate the current state of an API.
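
To make these link relations a little more concrete, here is a minimal sketch of how a client might discover them from a Link header using Python and the requests library–the example.com URLs are hypothetical, and it assumes a provider advertising the relations exactly as the specification describes.

```python
import requests

# Fetch an API's home resource; the provider is assumed to advertise its
# documentation, description, and status resources via a Link header, e.g.:
# Link: <https://example.com/docs>; rel="service-doc",
#       <https://example.com/openapi.json>; rel="service-desc",
#       <https://example.com/status>; rel="status"
response = requests.get("https://api.example.com/")

# requests parses the Link header into a dict keyed by relation type
links = response.links

documentation = links.get("service-doc", {}).get("url")  # human-oriented docs
description = links.get("service-desc", {}).get("url")   # machine-readable description
status = links.get("status", {}).get("url")              # operational status resource

print("Documentation:", documentation)
print("Description:", description)
print("Status:", status)
```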

It makes me happy to see this specification pushing forward and formalizing the conversation. I see the evolution of link relations for APIs as an important part of the API discovery and definition conversations in coming years. Processing this specification has helped jumpstart some conversation around APIs.json, as well as other specifications like JSON Home and Pivio.

Thanks for letting me build on your work Erik! - I am looking forward to contributing.


About api.data.gov

I’m going to borrow, modify, and improve on the content from api.data.gov, because it is an important effort I want my readers to be aware of. I want more of them to help educate other federal agencies about why it is a good idea to bake api.data.gov into their API operations, and to help apply pressure until EVERY federal agency is up and running using a common API management layer.

Ok, so what is api.data.gov? api.data.gov is a free API management service for federal agencies. Our aim is to make it easier for you to release and manage your APIs. api.data.gov acts as a layer above your existing APIs. It transparently adds extra functionality to your APIs and helps deal with some of the repetitive parts of managing APIs.

Here are the features of api.data.gov:

  • You’re in control: You still have complete control of building and hosting your APIs however you like.
  • No changes required: No changes are required to your API, but when it’s accessed through api.data.gov, we’ll transparently add features and handle the boring stuff.
  • Focus on the APIs: You’re freed from worrying about things like API keys, rate limiting, and gathering usage stats, so you can focus on building the next great API.
  • Make it easy for your users: By providing a standard entry point to participating APIs, it’s easier for developers to explore and use APIs across the federal government.

api.data.gov handles the API keys for you:

  • API key signup: It’s quick and easy for users to signup for an API key and start using it immediately.
  • Shared across services: Users can reuse their API key across all participating api.data.gov APIs.
  • No coding required: No code changes are required to your API. If your API is being hit through api.data.gov, you can simply assume it’s from a valid user.

api.data.gov tracks all the traffic to your API and gives you tools to easily analyze it:

  • Demonstrate value: Understand how your API is being used so you can gauge the value and success of your APIs.
  • Visualize usage and trends: View graphs of the overall usage trends for your APIs.
  • Flexible querying: Drill down into the stats based on any criteria. Find out how much traffic individual users are generating, or answer more complex questions about aggregate usage.
  • Monitor API performance: We gather metrics on the speed of your API, so you can keep an eye on how your API is performing.
  • No coding required: No code changes are required to your API. If your API is being hit through api.data.gov, we can take care of logging the necessary details.

api.data.gov helps with publishing documentation for your API:

  • Hosted or linked: We can host the documentation of your API, or, if you already have your own developer portal, we can simply link to it.
  • One stop shop: As more agencies add APIs to api.data.gov, users will be able to discover and explore more government APIs all at one destination.

api.data.gov helps you rate limit because you might not want to allow all users to have uncontrolled access to your APIs:

  • Prevent abuse: Your API servers won’t see traffic from users exceeding their limits, preventing additional load on your servers.
  • Per user limits: Individual users can be given higher or lower rate limits.
  • No coding required: No code changes are required to your API. If your API is being hit, you can simply assume it’s from a user that hasn’t exceeded their rate limits.

api.data.gov is powered by the open source project API Umbrella. You can contribute to the development of this platform, or setup your own instance and run the entire stack yourself. If you’re interested in exploring any of this for your APIs, please contact us. In general, it’s easy to take any existing API your agency has (or is in the process of building) and put api.data.gov in front of it. This can be an easy way to get started and see what type of functionality api.data.gov might provide for your API.
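
To give a feel for what this looks like from the consumer side, here is a minimal sketch in Python–the agency path is hypothetical, and it assumes the usual api.data.gov convention of passing your key in an X-Api-Key header (an api_key query parameter generally works as well).

```python
import requests

API_KEY = "YOUR_API_DATA_GOV_KEY"  # issued once, reusable across participating APIs

# Hypothetical agency API proxied behind api.data.gov
url = "https://api.data.gov/example-agency/v1/records"

response = requests.get(
    url,
    headers={"X-Api-Key": API_KEY},  # the management layer validates the key
    params={"per_page": 10},
)

print(response.status_code)
# Rate limit headers, if present, come from the management layer, not the agency backend
print(response.headers.get("X-RateLimit-Limit"), response.headers.get("X-RateLimit-Remaining"))
print(response.json())
```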

api.data.gov is all about consistent API management across the federal government, which means developers will be able to get at government data, content, and algorithms more efficiently, and integrate them into web, mobile, and other applications. We need more government agencies doing this, taking advantage of api.data.gov, and getting to work developing an awareness of who is consuming their API resources. Eventually API management will be how government agencies generate revenue on top of valuable API resources, charging commercial users enough so that each agency can cover the costs of operations, and hopefully make more of an investment in the resources they are making available.


Embeddable API Tooling Discovery With JSON Home

I have been studying JSON Home, trying to understand how it sizes up to APIs.json, and other formats I’m tracking on like Pivio. JSON Home has a number of interesting features, and I thought one of their examples was also interesting, and was relevant to my API embeddable research. In this example, JSON Home was describing a widget that was putting an API to use as part of its operation.

Here is the snippet from the JSON Home example, providing all details of how it works:
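
As a rough approximation of the kind of resource entry the example describes (a hypothetical sketch expressed here as a Python dictionary, not the spec's exact snippet), a JSON Home document for a widget API looks something like this:

```python
import json

# A rough approximation of a JSON Home style document for a widget API.
# The relation URIs, paths, and hints below are illustrative, not the
# specification's exact example.
json_home = {
    "resources": {
        "https://example.org/rel/widgets": {
            "href": "/widgets/",
            "hints": {
                "allow": ["GET"],
                "formats": {"application/json": {}},
            },
        },
        "https://example.org/rel/widget": {
            "href-template": "/widgets/{widget_id}",
            "href-vars": {"widget_id": "https://example.org/param/widget"},
            "hints": {
                "allow": ["GET", "PUT", "DELETE"],
                "formats": {"application/json": {}},
            },
        },
    }
}

print(json.dumps(json_home, indent=2))
```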

JSON Home seems very action oriented. Everything about the format leads you towards taking some sort of API driven action, something that makes a lot of sense when it comes to widgets and other embeddables. I could see JSON Home being used as some sort of definition for button or widget generation and build tooling, providing a machine readable definition for the embeddable tool, and what is possible with the API(s) behind it.

I’ve been working towards embeddable directories and API stacks using APIs.json, providing distributed and embeddable tooling that API providers and consumers can publish anywhere. I will be spending more time thinking about how this world of API discovery can overlap with the world of API embeddables, providing not just a directory of buttons, badges, and widgets, but one that describes what is possible when you engage with any embeddable tool. I’m beginning to see JSON Home similar to how I see Postman Collections, something that is closer to runtime, or at least deploy time. Where APIs.json is much more about indexing, search, and discovery–maybe some detail about where the widgets are, or maybe more detail about what embeddable resources are available.


A Hack Day Event To Help The Link-SF App Speak Human Services Data Specification (HSDS)

I went up to San Francisco on Wednesday to participate in a social good hack day at Optimizely. They held their event at their downtown offices, where 20+ employees showed up to hack on some social good projects. Open Referral and our partner Benetech had suggested Human Services Data Specification (HSDS) as a possible project, which resulted in us being one of the hack projects for the event.

The Open Referral Human Services Data Specification (HSDS) team consisted of five Optimizely developers.

  • Derek Hammond - Software Engineer
  • Michael Fields - Software Engineer
  • Zachary Power - Software Engineering Intern
  • Quinton Dang - Software Engineer
  • Asa Schachar - Engineering Manager

The team’s overall strength leaned toward front-end web and mobile development, so we decided to “forward engineer” the Link-SF application, which provides a simple web or mobile application to help folks find a variety of human services in a handful of categories like food, housing, hygiene, medical, and technology. Link-SF is an ongoing collaboration between the Tenderloin Technology Lab and Zendesk, Inc., and we wanted to help contribute to their work, while also making the application potentially deployable by other cities and regions.

Once the team got to work on the project they identified that we could get at the data behind the SF application. The team decided they would forward engineer the dataset, the API, and the UI for the web and mobile application, making it all speak Human Services Data Specification (HSDS)–here is what they did:

  • Took the Link-SF datasets and saved as a single JSON file.
  • Converted the JSON schema to use HSDS – the changes weren’t significant.
  • Made it so that the app reads location data from a Github repository (you can change this to your url)
  • Updated the taxonomy fetch to use the new data
  • Ensured the UI worked with the new data

You can find the project in an Optimizely Github repository, which they are going to invest some more time into cleaning up this week–so don’t judge! ;-) If you want to run our app locally, you can save a file as config.js and follow the instructions on the setup page. I am going to play with the application some more, and wait for them to clean up the page before we add the project to our official open source Human Services Data Specification (HSDS) solutions, adding another HSDS compliant tool that any city, county, or other organization could deploy for their own purposes.

We didn’t have much time for the hackathon. It went from 2:30 PM to 8:30 PM, and I was impressed with what the team was able to get done. I like their hack which used Github as a backend, and the speed at which they were able to work together to fork the application and make it work using HSDS. It provides an interesting open source blueprint that other cities could also fork, and implement with their own localized datasets. Their work shows what is possible when you decouple the backend API (or JSON file) from the front-end application, and utilize a common schema and API definition. It transforms the application from a single use application into a multi-use application that can be used over and over again.
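
As a minimal sketch of that decoupled pattern–assuming an HSDS style locations file hosted in a Github repository, with a hypothetical URL and field names–a client can pull its data straight from the repo:

```python
import requests

# Hypothetical raw Github URL for an HSDS style locations file
DATA_URL = "https://raw.githubusercontent.com/example-org/link-city-data/master/locations.json"

def fetch_locations(category=None):
    """Fetch location records from the Github-hosted JSON file,
    optionally filtering by a taxonomy category."""
    locations = requests.get(DATA_URL).json()
    if category:
        locations = [l for l in locations if category in l.get("categories", [])]
    return locations

# Any city could fork the repo, swap in their own data, and point the app here
for location in fetch_locations(category="food"):
    print(location.get("name"), "-", location.get("physical_address", ""))
```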

Thank you to Optimizely for putting on the hackathon. Thank you to Benetech for making the event happen. Thank you to the five developers who helped move the HSDS conversation forward. Once the Github repository gets cleaned up, I will spend some time playing with the application, and publish a follow-up story. I’d like to make the project push button deployable, so that organizations could launch a new instance of Link-[Any City] for low or no cost. We have another hackathon coming up in September on the east coast, and I learned a lot at this one about what I should have ready when it comes to helping hackers be successful in the short time we have. It has been a while since I’ve attended a hackathon, and I forgot how they can be a good vehicle for moving projects forward–at least the social good type of event. ;-)


Image Logging With Amazon S3 API

I have been slowly evolving my network of websites in 2017, overhauling the look of them, as well as how they function. I am investing cycles into pushing as much of my infrastructure towards being as static as possible, minimizing my usage of JavaScript wherever I can. I am still using a significant amount of JavaScript libraries across my sites for a variety of use cases, but whenever I can, I am looking to kill my JavaScript or backend dependencies, and reduce the opportunity for any tracking and surveillance.

While I still keep Google Analytics on my primary API Evangelist sites, as my revenue depends on it, whenever possible I keep personal projects free of any JavaScript tracking mechanisms. Instead of JavaScript I am defaulting to image logging using Amazon S3. Most of my sites tend to have some sort of header image, which I store in a common public bucket on Amazon S3, so all I have to do is turn on logging, and then get at the logging details via the Amazon S3 API. Of course, images get cached within a user’s browser, but the GETs for my images still give me a pretty good set of numbers to work with. I’m not concerned with too much detail, I just generally want to understand the scope of traffic a project is getting, and whether it is 5, 50, 500, 5,000, or 50,000 visitors.
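
Here is a minimal sketch of that approach using boto3, assuming server access logging is already turned on and delivering log files to a logs/ prefix–the bucket, prefix, and image key below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "example-public-images"            # hypothetical public bucket holding header images
LOG_PREFIX = "logs/"                        # hypothetical prefix where access logs are delivered
IMAGE_KEY = "projects/example-header.png"   # hypothetical header image for one project

def count_image_gets():
    """Scan S3 server access logs and count GET requests for one image."""
    count = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=LOG_PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read().decode("utf-8")
            for line in body.splitlines():
                # Access log lines are space delimited; REST.GET.OBJECT marks an object GET
                if "REST.GET.OBJECT" in line and IMAGE_KEY in line:
                    count += 1
    return count

print("Approximate visits:", count_image_gets())
```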

My two primary CDNs are Amazon S3 and Github. I’m trying to pull together a base strategy for monitoring activity across my digital footprint. My business presence is very different than my personal presence, but with some of my personal writing, photography, and other creations I still like to keep a finger on the pulse of what is happening. I am just looking to minimize the data gathering and surveillance I am participating in these days. Keeping my personal and business websites static, and with a minimum footprint, is increasingly important to me. I find that a minimum viable static digital footprint protects my interests, maximizes my control over my work, and minimizes the impact to my readers and customers.


Patent Number 9325732: Computer Security Threat Sharing

The main reason that I tend to rail against API specific patents is that much of what I see being locked up reflects the parts and pieces that make the web work. I see things like hypermedia, and other concepts that are inherently about sharing, collaboration, and reuse–something that should never be patented. This concept applies to other patents I’m seeing, but rather than being about the web, it is about trust, and the sharing of information. Things that shouldn’t be locked up, and that exist within realms where the concept of patents actually hurts the web and APIs.

Today’s patent is out of Amazon, who are prolific patenters of web and API concepts. This one though is about the sharing of computer security threat information, outlining something that should be commonplace on the web.

Title - Computer security threat sharing
Number - 09325732
Owner - Amazon Technologies, Inc.
Abstract - A computer security threat sharing technology is described. A computer security threat is recognized at an organization. A partner network graph is queried for security nodes connected to a first security node representing the organization. The first security node is connected to at least a second security node representing a trusted security partner of the organization. The second security node is associated with identification information. The computer security threat recognized by the organization is communicated to the trusted security partner using the identification information associated with the second security node.

I’m sorry. I just do not see this as unique, original, or remotely a concept that should be patentable. Similar to a previous patent I wrote about on trust, I just don’t think that the sharing of security information needs to be locked up. The USPTO should recognize this. I feel like this type of patent shows how broken the patent process is, and how distorted companies’ views are on what is a patentable idea. Honestly, these types of patents feel lazy to me, and lack any creativity, skill, or sensible view of how the web works.

I feel like I should start rating these patents with some sort of Rotten Tomatoes score, and start giving companies some sort of patent ranking for their portfolio. Something that encompasses the scope, lack of creativity, originality, and damaging effects of the patent. This reminds me that I need to finish my work pulling court cases from the Court Listener API, and indexing them for any companies, and their patent portfolios. Ultimately this is where the real damage to APIs and the web will play out, similar to the Oracle vs. Google API copyright affair, but I will keep sharing stories of these ridiculous patents, and maybe even start ranking them all by how much they stink.


My Focus On Public APIs Also Applies Internally

A regular thing I hear from folks when we are having conversations about the API lifecycle is that I focus on public APIs, and they are more interested in private APIs. Each time I hear this I try to take time and assess which parts of my public API research wouldn’t apply to internal APIs. You wouldn’t publish your APIs to public API search engines like APIs.io or ProgrammableWeb, and maybe you wouldn’t be evangelizing your APIs at hackathons, but I’d say 90% of what I study is applicable to internal APIs, as well as publicly available APIs.

With internal APIs, or private network partner APIs, you still need a portal, documentation, SDKs, support mechanisms, and communication and feedback loops. Sure, how you use the common building blocks of API operations that I track on will vary between private and public APIs, but this shifts from industry to industry, and API to API as well–it isn’t just a public vs. private thing. I would say that 75% of my API industry research is derived from public API operations–it is just easier to access, and honestly more interesting to cover than private ones. The other 25%, the internal API conversations I’m having, always benefit from thinking through the common API building blocks of public APIs, looking for ways they can be more successful with internal and partner APIs.

I’d say that a significant part of the mindshare for the microservices philosophy is internally focused. I think this is something that will come back to hurt some implementations, cutting out many of the mechanisms and common elements required in a successful API formula. Things like portals, documentation, SDKs, testing, monitoring, discovery, support, and communications all contribute to an API working, or not working. I’ve said it before, and I’ll say it again. I’m not convinced that there is any true separation in public vs private APIs, and there remains a great deal we can learn from public API providers, and put to work across internal operations, and with our most trusted partners.


Observability In Botnet Takedown By Government On Private Infrastructure

I’m looking into how to make API security more transparent and observable lately, and looking for examples of companies, institutions, organizations, politicians, and government agencies calling for observability into wherever APIs are impacting our world. Today’s example comes out of POLITICO’s Morning Cybersecurity email newsletter, which has become an amazing source of daily information for me, regarding transparency around the takedown of bot networks.

“If private companies cooperate with government agencies - for example, in the takedown of botnets using the companies’ infrastructure - they should do so as publicly as possible,” argued the Center for Democracy & Technology. “One upside to compulsory powers is that they presumptively become public eventually, and are usually overseen by judges or the legislative branch,” CDT argued in its filing. “Voluntary efforts run the risk of operating in the dark and obscuring a level of coordination that would be offensive to the general public. It is imperative that private actors do not evolve into state actors without all the attendant oversight and accountability that comes with the latter.”

I’ve been tracking on the transparency statements and initiatives of all the API platforms. At some point I’m going to assemble the common building blocks of what is needed for executing platform transparency, and I will be including these asks of the federal government. As the Center for Democracy & Technology states, this relationship between the public and private sector when it comes to platform surveillance needs to be more transparent and observable in all forms. Bots, IoT, and the negative impacts of API automation need to be included in the transparency disclosure stack. If the government is working with a platform to discover, surveil, or shut down bot networks, there should be some point at which operations are shared, including the details of what was done.

We need platform transparency and observability at the public and private sector layer of engagement. Sure, this sharing of information would be time sensitive, respecting any investigations and laws, but if private sector infrastructure is being used to surveil and shut down U.S. citizens there should be an accessible, auditable log for this activity. Of course it should also have an API allowing auditors and researchers to get all relevant information. Bots are just one layer of the API security research I’m doing, and the overlap between the bot world and API transparency, observability, and security is an increasingly significant vector when it comes to policing and surveillance, but also when it comes to protecting the privacy and safety of platform people (citizens).


Continuous Integration And Deployment For Government Procurement

I was reading the Open by Default Portal Procurement Pilot for the Treasury Board of Canada, where section 6, Licensing states: “To support the objectives of the open government initiative, the Solution must be open source and licensed in accordance with the Massachusetts Institute of Technology License (“MIT License”). Under the resulting contract, the Contractor will be required to deposit the Solution’s source code on the GitHub platform (https://github.com) – under the MIT License.”

This just seems like the way it should be for all government technology solutions. I’ve heard the naysayers in federal government say that proprietary software is the best route, but if it drives public infrastructure, in my opinion the code should be publicly available in this way. Honestly, code should be deployed at regular intervals throughout the development and deployment process, opening up the code to QA and security audits by the public, and 3rd parties. I hope this approach evolves into more of a continuous deployment and integration workflow when it comes to delivering software in government, where vendors have to plugin, open up their delivery cycles to more scrutiny, and leverage Github as the center of each procurement step from start to finish. Heck, let’s connect payments to each stop along the way.

I’m a proponent of this not just to make the delivery of government software more observable and accountable. I want this process out in the open to help other agencies learn from the journey, tune into the process, and maybe reuse, build upon, and evolve existing solutions as part of their operations. I will keep an eye on what is going on up in Canada when it comes to requiring vendors to publish code to Github. I also know there are similar efforts in the U.S. and other countries which I’ll also start scratching at and learning more about. It is definitely a healthy pattern I’d like to see more of, and I will continue to invest time shining a light on any government taking a lead like Canada is.


An Open Source API Security Intelligence Gathering, Processing, And Distribution Framework

I was reading about GOSINT, the open source intelligence gathering and processing framework over at Cisco. “GOSINT allows a security analyst to collect and standardize structured and unstructured threat intelligence. Applying threat intelligence to security operations enriches alert data with additional confidence, context, and co-occurrence. This means that you are applying research from third parties to your event data to identify similar, or identical, indicators of malicious behavior.” The framework is written in Go, with a JavaScript front-end, and uses APIs as threat intelligence sources.

When you look at the configuration section of the README for GOSINT, you’ll see information for setting up threat intelligence feeds, including the Twitter API, the AlienVault Open Threat Exchange (OTX) API, the VirusTotal API, and Collaborative Research Into Threats (CRITS). GOSINT acts as an API aggregator for a variety of threat information, which then allows you to scour the information for threat indicators, which you can evolve over time–providing a pretty interesting model for not just threat information sharing, but also API driven aggregation, curation, and sharing.

GOSINT also has the notion of behaving as a “transfer station”, where you can export refined data in CSV or CRITS format. Right here seems like an opportunity for some Github integration, adding continuous integration and deployment to open source intelligence and processing workflows. Making sure refined, relevant threat information is available where it is needed, via existing API deployment and integration workflows. It wouldn’t take much to publish CSV, YAML, and JSON files to Github, which can then be used to drive distributed dashboards, visualizations, and other awareness building tools. Plus, the refined threat information is now published as CSV/JSON/YAML on Github where it can be ingested by any system or application with access to the Github repository.
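
As a minimal sketch of that idea–the indicator records here are hypothetical, and this isn't GOSINT's actual export format–refined threat intelligence could be written out as CSV and JSON that a CI/CD job then commits to a Github repository:

```python
import csv
import json

# Hypothetical refined indicators; in practice these would come from a
# GOSINT export or a similar aggregation step
indicators = [
    {"indicator": "203.0.113.42", "type": "ip", "source": "example-feed", "confidence": "high"},
    {"indicator": "malicious.example.com", "type": "domain", "source": "example-feed", "confidence": "medium"},
]

# JSON copy for applications and dashboards
with open("indicators.json", "w") as f:
    json.dump(indicators, f, indent=2)

# CSV copy for spreadsheets and lightweight tooling (YAML would work the same way)
with open("indicators.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["indicator", "type", "source", "confidence"])
    writer.writeheader()
    writer.writerows(indicators)

# A CI/CD job could now commit these files to a Github repository, where any
# system or application with access to the repo can ingest them.
```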

GOSINT is just one of the interesting tools I’m coming across as I turn up the volume on my API security research, thanks to the investment of ElasticBeam, my API security partner. They’ve invested in an API security guide, as well as a white paper, which is something that will generate a wealth of stories like this along the way, as I find interesting API security artifacts. I’m looking to map out the API security landscape, but I’m also interested in understanding open source API aggregation, analysis, and syndication platforms that integrate with existing CI/CD workflows, to help feed my existing human services API work, and other city, state, and federal government API projects I’m working on.


Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

One very useful API, VersionEye, notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, and has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise.

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye’s approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don’t get nervous, there is still plenty of money to be made via the cloud services.


A Fresh Look At The Embeddable Tools Built On The Twitter API

Over the years I have regularly showcased Twitter as an example of API driven embeddable tools like buttons, badges, and widgets. In 2017, after spending some time in the Twitter developer portal, it is good to see Twitter still investing in their embeddable tools. The landing page for the Twitter embeddables still provides the best example out there of the value of using APIs to drive data and content across a large number of remote web sites.

Twitter has several distinct elements to their web embeddables:

  • Tweet Button - That classic tweet button, allowing users to quickly Tweet from any website.
  • Embedded Tweets - Taking any Tweet and embedding on a web page showing its full content.
  • Embedded Timeline - Showing curated timelines on any website using a Twitter embeddable widget.
  • Follow Button - Helping users quickly follow your Twitter account, or your company’s Twitter account.
  • Twitter Cards - Present link summaries, engaging images, product information, or inline video as embeddable cards in the timeline.

Account interactions, messaging, posting, and other API enabled functions are made portable using JavaScript, allowing them to be embedded and executed on any website. JavaScript widgets, buttons, and other embeddables are still a very tangible, useful example of APIs in action. Something I can talk to anyone about, helping them understand why you might want to do APIs, or at least know about APIs.

We bash on Twitter a lot in the API community. However, after a decade of operation, you have to give it to them. They are still doing it. They are still keeping it simple with embeddable tools like this. I can confidently say that APIs are automating some serious illness on the Twitter platform at the moment, and there are many things I’d like to be different with the Twitter API, but I am still pleased that I can keep finding examples from the Twitter platform to showcase on API Evangelist after seven years of writing about them.


Patent #9397835, Web of trust management in a distributed system

I found a couple more API patents in my notebook that I wanted to get published. I try to take time regularly to publish the strangest API related patents I can find. Today’s patent is out of Amazon, which I find to be a fascinating outlet for patent storytelling. It isn’t squarely in the realm of APIs like some of my others, but I think it tells a fascinating story by itself, showing how the web and the concept of a patent are colliding.

Title - Web of trust management in a distributed system
Number - 9397835
Owner - Amazon Technologies, Inc.
Publication Date - 2016-07-19
Application Date - 2014-05-21

Abstract - A web of trust is used to validate states of a distributed system. The distributed system operates based at least in part on a domain trust. A root of trust issues the domain trust. Domain trusts are updatable in accordance with rules of previous domain trusts so that a version of a domain trust is verifiable by verifying a chain of previous domain trust versions.

I like that trust is being patented. Digital trust is now a patentable concept that Amazon can delegate if they choose. I’m just fascinated by what concepts are now fair game for patenting, as they enter into the digital realm. Now I’m curious how many physical trust patents might exist. Is the management of trust patented in the physical world in any way? I guess I could see how some of the components for determining trust could be patented, but I find the fact that trust, or more specifically trust management, is patentable to be a troubling thing.

It’s no secret that I’m anti API patents. I’m rarely convinced of the uniqueness of anything digital, warranting the issuing of a patent by the USPTO. I have to say that in a world where trust is patentable, suspect behavior will flourish–pretty much what we are seeing play out on the web on a daily basis. I’m adding this patent to my API authentication and API security research, because it is a component that already exists in these areas, and should be readily available for any provider to execute as they see fit.


HTTP as a Substrate

I am spending a significant amount of time reading RFCs lately. I find the documents to be very cumbersome to read, but the more you read, the more tolerant you become. When I browse through RFCs I’m always reminded of how little I actually know about the web. In an effort to push forward my education, and maybe yours along the way, I’m going to be cherry picking specific sections of the interesting RFCs I’m digesting here on the blog. Today’s RFC is 3205, filed under “Best Current Practice”, and is on the use of HTTP as a Substrate.

Recently there has been widespread interest in using Hypertext Transfer Protocol (HTTP) [1] as a substrate for other applications-level protocols. Various reasons cited for this interest have included:

  • familiarity and mindshare,
  • compatibility with widely deployed browsers,
  • ability to reuse existing servers and client libraries,
  • ease of prototyping servers using CGI scripts and similar extension mechanisms,
  • authentication and SSL or TLS,
  • the ability of HTTP to traverse firewalls, and
  • cases where a server often needs to support HTTP anyway.

The Internet community has a long tradition of protocol reuse, dating back to the use of Telnet as a substrate for FTP and SMTP. However, the recent interest in layering new protocols over HTTP has raised a number of questions about when such use is appropriate, and the proper way to use HTTP in contexts where it is appropriate. Specifically, for a given application that is layered on top of HTTP:

  • Should the application use a different port than the HTTP default of 80?
  • Should the application use traditional HTTP methods (GET, POST, etc.) or should it define new methods?
  • Should the application use http: URLs or define its own prefix?
  • Should the application define its own MIME-types, or use something that already exists (like registering a new type of MIME-directory structure)?

This memo recommends certain design decisions in answer to these questions.

This memo is intended as advice and recommendation for protocol designers, working groups, implementors, and IESG, rather than as a strict set of rules which must be adhered to in all cases. Accordingly, the capitalized key words defined in RFC 2119, which are intended to indicate conformance to a specification, are not used in this memo.

I love the notion of HTTP as a substrate. The definition of substrate is “a substance or layer that underlies something, or on which some process occurs, in particular” or also, “the surface or material on or from which an organism lives, grows, or obtains its nourishment”. I also love the notion of providing guidance for this line of thought. There are many things contained in this document I have learned from my time in the space, and included in my storytelling, without much thought regarding where it originated, or how accurate it was. I particularly like the notion of HTTP as a material in which an organism lives, but maybe more of a digital organism, or a bot. A reminder that everything that may take seed, flourish and grow in this environment, might not always be a good thing.


API Message Integrity with JSON Web Token (JWT)

I don’t have any production experience deploying JSON Web Tokens (JWT), but it has been something I’ve been reading up on, and staying in tune with for some time. I often reference JWT as the leading edge for API authentication, but there is one aspect of JWT I think is worth me referencing more often–message integrity. JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.

JWT can not only be used for authentication of both the message sender and receiver, it can ensure message integrity as well, leveraging a digital signature hash of the message body to verify that nothing changed during transmission. It adds another interesting dimension to the API security conversation, and while it may not be applicable to all APIs, I know many where it would make a lot of sense. Many of the networks we use and applications we depend on today are proxied, creating an environment where message integrity should always come into question, and JWT gives us another tool in our toolbox to help us keep things secure.
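
As a minimal sketch of what that looks like in practice, here is signing and verifying a message body with the PyJWT library and a shared HMAC secret–the payload and secret are placeholders:

```python
import jwt  # PyJWT

SECRET = "shared-secret-between-sender-and-receiver"  # placeholder

# Sender: wrap the message body in a signed token
message = {"order_id": 1234, "amount": 19.99, "currency": "USD"}
token = jwt.encode(message, SECRET, algorithm="HS256")

# Receiver: verification fails if the payload was altered in transit
try:
    body = jwt.decode(token, SECRET, algorithms=["HS256"])
    print("Message integrity verified:", body)
except jwt.InvalidTokenError:
    print("Message was tampered with or signed with a different key")
```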

I’m working my way through each layer of API operations, looking for aspects of API security that are often obscured, hidden, or just not discussed as they should be. I feel like JWT is definitely one track of API security that has evolved the conversation significantly over the last couple years, and is something that can make a significant impact on the space with just a little more storytelling and education. I’m going to make sure API request and response message integrity is a regular part of my API security storytelling, curriculum, and live talks that I develop.


Reducing Developers To A Transaction With APIs, Microservices, Serverless, Devops, and the Blockchain

A topic that keeps coming up in discussions with my partner in crime Audrey Watters (@audreywatters) about our podcast is the future of labor in an API world. I have not written anything about this, which means I’m still in the early stages of any research into this area, but it has come up in conversation, and is reflected regularly in my monitoring of the API space, so I need to begin working through my ideas in this area. A process that helps me better see what is coming down the API pipes, and fill the gaps in what I do not know.

Audrey has long joked about my API world using a simple phrase: “reducing everything to a transaction”. She says it mostly in jest, but other times I feel like she wields it as the Cassandra she channels. I actually bring up the phrase more than she does, because it is something I regularly find myself working in the service of as the API Evangelist. By taking a pro API stance I am actively working to break legacy business, institutional, and government processes down into a variety of individual tasks, or if you see things through a commercial lens, transactions.

Microservices
A microservices philosophy is all about breaking down monoliths into small bite size chunks, so they can be transacted independently, scaled, evolved, and deprecated in isolation. A microservice should do one thing, and do it well (no backtalk). It should do what it does as efficiently as possible, with as few dependencies as possible. Microservices are self-contained, self-sufficient, and have everything they need to get the job done under a single definition of a service (a real John Wayne of compute). And of course, everything has an API. Microservices aren’t just about decoupling the technology, they are about decoupling the business, and the politics of doing business within SMB, SME, enterprises, institutions, and government agencies–the philosophy for reducing everything to a transaction.

Containers
Containers are a microservice way of thinking about software that is born in the clouds, a by-product of the virtualization and API-ization of IT resources like storage and compute. In the last decade, as IT services moved from the basement of companies into the cloud, a new approach to delivering the compute, storage, and scalability needed to drive this new microservices way of doing business emerged, called containers. In 2017 businesses are being containerized. The enterprise monolith is being reduced down to small transactions, putting the technology, business, and politics of each business transaction into a single container, for more efficient development, deployment, scaling, and management. Containers are the vehicle moving the microservices philosophy forward–the virtualized embodiment of reducing everything to a transaction.

Serverless
Alongside a microservice way of life, driven by containerization, is another technological trend (undertow) called serverless. With the entire IT backend being virtualized in the cloud, the notion of the server is disappearing, lightening the load for developers in their quest to containerize everything, turning the business landscape into microservices that can be distilled down to a single, simple, executable, scalable function. Serverless is the codified conveyor belt of transactions rolling by each worker on the factory floor. Each slot on a containerized, serverless, microservices factory floor possesses a single script or function, allowing each transaction to be executed and replicated, so it can be applied over and over, scaled, and fixed as needed. Serverless is the big metal stamping station along a multidimensional digital factory assembly line.

DevOps
Living in microservices land, with everything neatly in containers, being assembled, developed, and wrenched on by developers, you are increasingly given more (or less) control over the conveyor belt that rolls by you on the factory floor. As a transaction developer you are given the ability to change the direction of your conveyor belt, speed things up, apply one or many metal stamp templates, and orchestrate as much, or as little, of the transaction supply chain as you can keep up with (meritocracy 5.3.4). Some transaction developers will be closer to the title of architect, understanding larger portions of the transaction supply chain, while most will be specialized, applying one or a handful of transaction templates, with no training or awareness of the bigger picture, simply pulling the DevOps knobs and levers within their reach.

Blockchain
Another trend (undertow) that has been building for some time, and that I have managed to ignore as much as I can (until recently), is the blockchain. Blockchain and the emergence of API driven smart contracts have brought the technology front and center for me, making it something I can no longer ignore, as I see signs that each API transaction will soon be put in the blockchain. The blockchain appears to be becoming the decentralized (ha!) and encrypted manifestation of what many of us have been calling the API contract for years. I am seeing movements from all the major cloud providers, and lesser known API providers, to ensure that all transactions are put into the blockchain, providing a record of everything that flows through API pipes, and has been decoupled, containerized, rendered serverless, and made available for devops orchestration.

Ignorance of Labor
I am not an expert in labor, unions, and markets. Hell, I still haven’t even finished my Marx and Engels Reader. But, I know enough to be able to see that us developers are fucking ourselves right now. Our quest to reduce everything to a transaction, decouple all the things, and containerize and render them serverless makes us the perfect tool(s) for some pretty dark working conditions. Sure, some of us will have the bigger picture, and make a decent living being architects. The rest of us will become digital assembly line workers, stamping out and maintaining a handful of services that do one thing and do it well. We will be completely unaware of dependencies, or how things are orchestrated, barely able to stay afloat, pay the bills, leaving us thankful for any transactions sent our way.

Think of this frontline in terms of Amazon Mechanical Turk + API + Microservices + Containers + Serverless + Blockchain. There is a reason young developers make for good soldiers on this front line. Lack of awareness of history. Lack of awareness of labor. It makes for great digital factory floor workers, stamping out transactions for reuse elsewhere in the digital assembly line process. This model will fit well with current Silicon Valley culture. There will still be enough opportunity in this environment for architects and cybersecurity theater conductors to make money, exploit, and generate wealth. Without the defense of unions, government, or institutions, us developers will find ourselves reduced to transactions, stamping out other transactions on the digital assembly line floor.

I know you think you’re savvy. I used to think this too. Then after having the rug pulled out from under me, and the game changed around me by business partners, investors, and other actors who were playing a game I’m not familiar with, I have become more critical. You can look around the landscape right now and see numerous ways in which power has set its sights on the web, completely distorting any notion of the web being a democratic, open, inclusive, or safe environment. Why do us developers think it will be any different with us? Oh yeah, privilege.


My URL Shortener Is Just An API With Postman As My Client

I have my own URL shortener for API Evangelist called apis.how. I use it to track the click through rates for some of my research projects, and partner sponsorships. I’ve had the URL shortener in operation for about two years now, and I still do not have any type of UI for it, relying 100% on Postman for adding, searching, and managing the URLs I am shortening, and tracking on.

My URL shortener just hasn’t risen to a level of priority where I’ll invest any time into an administrative interface or dashboard. I used Bitly and Google for a while, but I really just needed a simple shortener with basic counts, nothing more. When I bought the domain I launched a handful of API endpoints to support it, allowing me to add, update, search, and remove URLs, as well as track the click throughs, and query how many clicks a link received each month. I can easily accomplish all of this through the Postman interface, making basic calls to my simple API–no over-engineering necessary.
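
The same calls I make through Postman can be sketched out in a few lines of Python–the apis.how endpoints and parameters here are hypothetical stand-ins for my simple API:

```python
import requests

BASE = "https://apis.how/api"   # hypothetical base path
API_KEY = "MY_PRIVATE_KEY"      # placeholder

# Add a new shortened URL
new_link = requests.post(
    f"{BASE}/urls",
    headers={"X-Api-Key": API_KEY},
    json={"url": "https://apievangelist.com/some-research-project/", "slug": "research"},
).json()
print("Short URL:", new_link.get("short_url"))

# Pull click counts for a link, by month
clicks = requests.get(
    f"{BASE}/urls/research/clicks",
    headers={"X-Api-Key": API_KEY},
    params={"month": "2017-08"},
).json()
print("Clicks:", clicks)
```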

I have been considering running a daily job that pulls view counts for URLs and publishes them to Github as YAML, where I can drive a simple visualization, but honestly I’m not that numbers oriented. I like what API clients like Postman and Restlet provide. Even though I’m equipped to make calls using JavaScript, PHP, or CURL, I prefer just accessing it via my web client–no coding necessary. I wouldn’t manage all my systems in this way, but for really basic ones like my URL shortener, I’m not sure I will ever actually evolve it beyond just being a simple API.

