{"API Evangelist"}

My Developer Portal Checklist For A Human Services API

I was handed the URL for a human services API implementation for Miami. It was my job to now deploy a portal, documentation, and other supporting resources for the API implementation. This project is part of the work I'm doing with Open Referral to help push forward the API conversation around the human services data specification (HSDS).

I began by forking my minimum viable API portal definition to provide a doorway for the Miami Open211 API. Next, I set up a basic presence for the human services API, giving the portal a title and a basic description of what the service does, and then worked through each of the portal elements that will help people put the data to work.

Getting Started
It can be hard to cut through all the information available and figure out what you need to get going with an API. The portal has a getting started page providing a basic introduction, a handful of links to the documentation, code, and where to get help--the page is driven from a YAML data store available in the _data folder for the repository.

Authentication
I included an authentication page to make it clear that the API is publicly available, but it also provides a placeholder to explain that we will be opening up write access to the organizations, locations, and services that are being made available--the page is driven from a YAML data store available in the _data folder for the repository.

Frequently Asked Questions
Next, I wanted to always have the most frequently asked questions front and center where anyone can find them. I am using this page as a default place to publish any questions asked via Github, Twitter, or email. The page is driven from a YAML data store available in the _data folder for the repository.
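As a sketch of what that could look like, a Jekyll _data driven FAQ page might be fed by a YAML file along these lines (the file name and fields here are hypothetical, not the actual schema used by this portal):

```yaml
# _data/faqs.yaml -- hypothetical example of a YAML-driven FAQ data store
- question: "Is the API free to use?"
  answer: "Yes, read access to the API is publicly available."
  source: "github"
- question: "What data specification does the API follow?"
  answer: "The Human Services Data Specification (HSDS) from Open Referral."
  source: "twitter"
```

A Liquid template on the FAQ page can then loop over these entries to render the list.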

Documentation
Now for the documentation, the most important piece of the puzzle. I published Liquid-driven documentation generated from the OpenAPI definition for the API. With a little bit of JavaScript voodoo, I was able to make the documentation interactive so that you can actually try out each path and see the JSON response--the documentation is driven by the APIs.json and the OpenAPI for the API.

Code Samples
After completing the OpenAPI definition for the API documentation, I used the machine-readable definition to generate code samples using swagger code-gen. I published C#, Go, Java, JavaScript, PHP, Python, and Ruby code samples to help developers get started with their projects. All the language samples are published to a separate Github repository and the page is driven from a YAML data store available in the _data folder for the repository.
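For reference, generating these kinds of samples with swagger-codegen looks roughly like the following (the file names and output paths are illustrative, not the actual ones used for this project):

```shell
# Generate client code samples from the machine-readable OpenAPI (Swagger) definition.
swagger-codegen generate -i openapi.json -l python -o samples/python
swagger-codegen generate -i openapi.json -l ruby -o samples/ruby
swagger-codegen generate -i openapi.json -l php -o samples/php
```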

Postman Collection
To help jumpstart integration, I also generated and published a Postman Collection, so that anyone can quickly import it into their client and get to work playing with the API in their environment. You can do this with OpenAPI as well, but Postman helps extend the possibilities--the Postman Collection is editable via its Github page.

Road Map
Next, I published a road map so we could share what is next for the project, providing a list where developers can stay in tune with what is going to happen. The road map entries are pulled from the Github issues for any entry with the road map label. There is a script that can be run regularly to keep the issues in sync with the roadmap, and the page is driven from a YAML data store available in the _data folder for the repository.
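The sync script could work something like this sketch, which filters GitHub issue objects (as returned by the GitHub issues API) down to simple road map entries--the function names and YAML fields here are my own assumptions, not the actual script:

```python
# Sketch of a road map sync: pull GitHub issues labeled "roadmap" and
# reduce them to simple entries suitable for a Jekyll _data YAML file.
import json
import urllib.request


def issues_to_entries(issues, label="roadmap"):
    """Keep only issues carrying the label, trimmed to the fields the portal needs."""
    entries = []
    for issue in issues:
        labels = [l["name"] for l in issue.get("labels", [])]
        if label in labels:
            entries.append({
                "title": issue["title"],
                "url": issue["html_url"],
                "date": issue["created_at"][:10],  # keep just YYYY-MM-DD
            })
    return entries


def fetch_issues(owner, repo):
    """Fetch open issues for a repository from the GitHub API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Run on a schedule, the output of `issues_to_entries()` can be written straight into the _data folder, keeping the road map page in sync with the labeled issues.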

Issues
Similar to the road map, I created a page sharing any open Github issues labeled 'issues', to help communicate outstanding and known issues with the platform. It stays in sync using the same script, and the page is driven from a YAML data store available in the _data folder for the repository.

Change Log
In addition to a road map and known issues, when these items get accomplished or fixed they get moved to the change log, keeping a complete history of everything that has changed with the platform. It stays in sync using the same script, and the page is driven from a YAML data store available in the _data folder for the repository.

Status Page
Beyond the resources to get up and running with documentation and code samples, and a road map, issues, and change log to stay in tune with the platform, I wanted a status page keeping an eye on things. I signed up for a monitoring service called API Science (which I highly recommend), imported the OpenAPI definition, and set up monitors to make sure the API stays up. The page is generated from an embeddable JavaScript widget and is updated using the API Science API.

Terms of Service
For the terms of service, I just grabbed an open source copy from Wikidot, providing a baseline place to start when it comes to the TOS for the API--the terms of service is editable via its Github page.

Privacy Policy
Similar to the terms of service, I just grabbed an open source privacy policy from Wikidot, providing a baseline place to start when it comes to a privacy policy for the API--the privacy policy is editable via its Github page.

Developer Blog
The blog for the project is driven by the Jekyll framework for the developer portal hosted on Github Pages. To manage the blog entries, you just add or update pages in the _posts folder for the website. All entries in the _posts folder are listed in chronological order on the blog page for the developer portal.

Github
This developer portal runs 100% on Github, leveraging the potential of Jekyll on Github Pages. The API is hosted on Heroku and run by someone else, but the developer portal is a static website, completely editable via Github through the web interface, API, or locally with the desktop client. Github also provides much of the support framework for the project, driving the road map, issues, change log, and 1/3 of the support options for developers--the entire site is driven from the _data store, with the website just being a Liquid-driven Jekyll template.

OpenAPI
This developer portal is defined by its OpenAPI definition. It drives the documentation, generated the code samples, fired up the API Science monitors, and is the central contract defining the API's operations. I will be keeping the OpenAPI up to date, and using it as the central truth for the API and its operations.

APIs.json
The portal is entirely indexed using APIs.json, providing a single machine-readable definition of the API and its operations. All the supporting pages of the API are linked to in the index, and their contents and data are all machine-readable YAML and JSON. The APIs.json provides access to the OpenAPI which describes the surface area of the API, as well as providing links to all its supporting operations.
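For illustration, a stripped-down APIs.json index follows this general shape (the names and URLs here are placeholders, not the actual Miami Open211 values):

```json
{
  "name": "Miami Open211",
  "description": "A human services API following the HSDS specification.",
  "url": "http://example.com/apis.json",
  "apis": [
    {
      "name": "Miami Open211 API",
      "humanURL": "http://example.com/",
      "baseURL": "http://api.example.com/",
      "properties": [
        {"type": "Swagger", "url": "http://example.com/openapi.json"},
        {"type": "x-documentation", "url": "http://example.com/documentation/"}
      ]
    }
  ]
}
```

The properties collection is where each supporting page of the portal gets linked, keeping everything discoverable from a single index.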

What Is Next?
I'm going to put things down for a couple of days. We still need some FAQs entered, and the content needs fluffing up and editing. Then we'll invite other folks to take a look. Then I will get to work on the POST, PUT, PATCH, and DELETE paths, and once those are certified I will push them as part of the OpenAPI, regenerate the code samples, and turn on the ability for people to get involved--not just reading data, but also potentially adding and updating data in the database--making things a group effort.

I'm going to take what I have here, and fork it into a new project, making it a baseline demo portal for Open Referral. My goal is to have a downloadable, forkable API portal that is well documented, and anyone providing an HSDS compliant API can put to use for their project. I just wanted to take a moment and gather my thoughts on what I did and share the approach I took with you.


New York Times Manages Their OpenAPI Using Github

I keep coming across companies managing their OpenAPI definitions in a single Github repository. One example of this is the New York Times, who has made the API definitions for their platform available as their own Github repository. It demonstrates the importance of maintaining your API definitions separately from any particular implementation, such as just your documentation.

You can find individual OpenAPIs for their archive_api, article_search, books_api, community, geo_api, most_popular_api, movie_reviews, semantic_api, times_tags, timeswire, and top_stories broken down into separate folders within the Github repository. The NYT also provides markdown documentation alongside the machine-readable OpenAPI definition in each folder, helping make sure things are human-readable.

It just makes sense to manage your API definitions this way. It's more than just documentation. When you do this, you are taking advantage of the repository and version control features of Github, but you also open things up for participation through forking and pull requests. The resulting definition and machine readable contract can then be injected anywhere into the integration and API lifecycle, internally or externally.

I personally like it when companies manage their API definitions in this way. It gives me a central truth to work with when profiling their operations, something that will be used across my research and storytelling. The more you describe your APIs in this way, the more chance I will be writing about them and including them across my work.


Mapping Github Topics To My API Evangelist Research

I was playing around with the new Github topics, and found that it provides an interesting look at the API space, one that I'm hoping will continue to evolve, and maybe I can influence.

I typed 'api-' into Github's topic tagging tool for my repository, and after I tagged each of my research areas with appropriate tags, I set out exploring these layers of Github by clicking on each tag. It is something that became quite a wormhole of API exploration.

I had to put it down, as I could spend hours looking through the repositories, but I wanted to create a machine-readable mapping to my existing API research areas, that I could use to regularly keep an eye on these slices of the Github pie--in an automated way.
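The machine-readable mapping could be as simple as a YAML file pairing my research areas with Github topic slugs--the file name and topic slugs below are just a hypothetical sketch:

```yaml
# _data/github-topics.yaml -- hypothetical mapping of research areas to Github topics
definitions:
  - openapi
  - api-blueprint
design:
  - api-design
deployment:
  - api-gateway
  - serverless
documentation:
  - api-documentation
```

With a mapping like this in place, a script can regularly query the Github topic search for each slug and keep an eye on these slices of the pie.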

Definitions - These are the topics I'm adding to my monitoring of the API space when it comes to API definitions. I thought it was interesting how folks are using Github to manage their API definitions.

I like how OpenAPI is starting to branch out into separate areas, as well as how this area touches on almost every other area listed here. I am going to work to help shape the tags present based on the definitions, templates, and tooling I find on Github in my work.

Design - There was only one API design related item, but it is something I expect to expand rapidly as I dive into this area further.

I know of a number of projects that should be tagged and added to the area of API design, and I have a number of sub-areas I'd like to see included as relevant API design tags.

Deployment - Deployment was a little tougher to get a handle on. There are many different ways to deploy an API, but these are the ones I've identified so far.

I know I will quickly be adding other areas here, tracking database, containerized, and serverless approaches to API deployment.

Management - There were two topics that jumped out to me for inclusion in my API management research.

As with all the other areas, I will be harassing some of the common API management providers I know to tag their repositories appropriately, so that they show up in these searches.

Documentation - There are always a number of different perspectives on what constitutes API documentation, but these are a few of the topics I've found so far.

I think that API console overlaps with API clients, but it works here as well. I will work to find a way to separate out the documentation tools, from the documentation implementations.

SDK - It is hard to identify what counts as an SDK. It is a sector of the space where I've seen renewed innovation, as well as a bending of the definition of what a development kit is.

I will be looking to identify language-specific variations as part of this mapping to API SDKs available on Github, making them discoverable through topic searches.

API Portal - It was good to see wicked.haufe.io as part of an API portal topic search. I know of a couple of other implementations that should be present, helping people see this growing area of API deployment and management.

This approach to providing Github driven API templates is the future of both the technical and business side of API operations. It is the seed for continuous integration across all stops along the API lifecycle.

API Discovery - Currently it is just my research in the API discovery topic search, but it is where I'm putting this area of my work down. I was going to add all my research areas, but I think that will make for a good story in the future.

API discovery is one of the areas I'm looking to stimulate with this Github topics work. I'm going to be publishing separate repositories for each of the APIs I've profiled as part of my monitoring of the API space, and highlighting those providers who do it as well. We need more API providers to publish their API definitions to Github, making them available to be applied at every other stop along the API lifecycle.

I've long used Github as a discovery tool. Tracking on the Github accounts of companies, organizations, institutions, agencies, and individuals is the best way to find the meaningful things going on with APIs. Github topics just adds another dimension to this discovery process, where I don't have to always do the discovery, and other people can tag their repositories, and they'll float up on the radar. Github repo activity, stars, and forks just add another dimension to this conversation.

I will have to figure out how to harass people I know about properly tagging their repos. I may even submit a Github issue for some of the ones I think are important enough. Maybe Github will allow users to tag other people's projects, adding another dimension to the conversation, while giving consumers a voice as well. I will update the YAML mapping for this project as I find new Github topics that should be mapped to my existing API research.


A Machine Readable Definition For Your AWS API Plan

I was learning about the AWS Serverless Developer Portal, and found their API plan layer to be an interesting evolution in how we define the access tiers of our APIs. There were a couple different layers of AWS's approach to deploying APIs that I found interesting, including the AWS marketplace integration, but I wanted to stop for a moment and focus in on their API plan approach.

Using the AWS API Gateway you can establish a variety of API plans, with the underlying mechanics of that plan configurable via the AWS API Gateway user interface or the AWS API Gateway API. In the documentation for the AWS Serverless Developer Portal, they include a JSON snippet of the configuration of the plan for each API being deployed.
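The plan portion of that configuration follows the general shape of an API Gateway usage plan, something like this (the field names approximate the usage plan model, and the values are purely illustrative):

```json
{
  "name": "Basic",
  "description": "Free tier access to the API",
  "quota": {
    "limit": 5000,
    "period": "MONTH"
  },
  "throttle": {
    "rateLimit": 100,
    "burstLimit": 200
  }
}
```

The quota caps how many requests a key can make per period, while the throttle settings govern the steady-state rate and burst capacity--together they make up the mechanics of each access tier.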

This reminds me that I need to take another look at my API plan research, and take the plan configuration, rate limit, and other service composition API definitions I have and aggregate their schema into a single snapshot. It has been a while since I worked on my machine-readable API plan definition, and there are now enough API management solutions with an API layer out there that I should be able to pull a wider sampling of the schema in play. I'm not in the business of defining what the definition should be, I am only looking to aggregate what others are doing.

I am happy to see more folks sharing machine-readable OpenAPI definitions describing the surface area of their APIs. As this work continues to grow we are going to have to also start sharing machine-readable definitions of the monetization, plan, and access layers of our API operations. After I identify the schema in play for some of the major API management providers I track on, I'm going to invest more work into my standard API plan definition to make the access levels of APIs more discoverable using APIs.json.


Will The Experian API Focus On The People Being Ranked?

I was reading this week about how Experian, the credit score company, "ventures nimbly into the API economy". I'm happy to see any company begin their API journey, especially companies whose important algorithms impact our lives in such a major way. APIs are critical when it comes to shining a light on how algorithms work and don't work.

According to the Experian developer page, "the Experian Connect API provides easy access to embed credit functionality on your websites and mobile apps. Consumer-empowered sharing allows you to create products and services for previously unreachable markets". Sadly I can't see much about the API itself, as you have to fill out a form and request access to see documentation or anything beyond just a basic description.

The Experian Connect API says you get access to credit scores and reports, but most of it sounds like your standard marketing speak. I find API documentation and actually playing with an API provide a much more honest take on what's going on. Their approach reflects what I've seen from other very secretive companies who are used to maintaining tight control over their algorithms, processes, and partnerships. 

Another aspect of the Experian Connect API I noticed as I read was that it doesn't focus on solutions for the end-users who are being scored and reported on; it is designed for businesses looking to pull reports on people for a variety of purposes. When you read the story about the Experian API, and the description on the Experian developer page, it is all very business focused--centering on the opportunity for businesses, not the people it scores and ranks.

Credit score agencies are some of the OG big data companies, long tracking information on people and selling access to businesses and the government. While Experian's venture into the API economy is worth noting, I'm guessing their API journey won't be a very public one. It's just not in their DNA. I wish they would see the value of incentivizing their partners to better serve the humans at the center of the service, and be more transparent about their algorithm(s), but I'm not going to hold my breath.

Not all APIs will be open by default. All we can do is tell stories that help convince companies to open up their APIs and be more transparent, allowing their algorithms to be more observable by everyone who is impacted along the way.


The AWS Serverless API Portal

I was looking through the Github accounts for Amazon Web Services and came across their Serverless API Portal--a pretty functional example of a forkable developer portal for your API, running on a variety of AWS services. It's a pretty interesting implementation because in addition to the tech of your API management it also helps you with the business side of things. 

The AWS Serverless Developer Portal "is a reference implementation for a developer portal application that allows users to register, discover, and subscribe to your API Products (API Gateway Usage Plans), manage their API Keys, and view their usage metrics for your APIs..[]..it also supports subscription/unsubscription through a SaaS product offering through the AWS Marketplace."--providing a pretty compelling API portal solution running on AWS.

There are a couple things I think are pretty noteworthy:

  • Application Backend (/lambdas/backend) - The application backend is a Lambda function built on the aws-serverless-express library. The backend is responsible for login/registration, API subscription/unsubscription, usage metrics, and handling product subscription redirects from AWS Marketplace.
  • Marketplace SaaS Setup Instructions - You can sell your SaaS product through AWS Marketplace and have the developer portal manage the subscription/unsubscription workflows. API Gateway will automatically provide authorization and metering for your product, and subscribers will be automatically billed through AWS Marketplace.
  • AWS Marketplace SNS Listener Function (Optional) (/listener) - The listener Lambda function will be triggered when customers subscribe or unsubscribe to your product through the AWS Marketplace console. AWS Marketplace will generate a unique SNS Topic where events will be published for your product.

This is the required infrastructure we'll need to get to what I've been talking about for some time with my wholesale API and virtual API stack stories. Amazon is providing you with the infrastructure you need to set up the storefront for your APIs, providing the management layer you will need, including monetization via their marketplace. This is a retail layer, but because your infrastructure is set up in this way, there is no reason you can't sell all or part of your setup to other wholesale customers, using the same AWS marketplace.

I have had AWS Marketplace on my list of solutions to better understand for some time now, but the AWS Serverless Developer Portal really begins to connect the dots for me. If you can sell access to your API infrastructure using this model, you can also sell your API infrastructure to others using this model. I will have to set up some infrastructure using this approach to better flesh out how AWS infrastructure and open templates like this serverless developer portal can help facilitate a more versatile, virtualized, and wholesale API lifecycle.

There is a more detailed walkthrough of how to get going with the AWS Serverless Developer Portal, helping you think through the details. I am a big fan of these types of templates--forkable Github repositories, with a blueprint you can follow to achieve a specific API deployment, management, or any other lifecycle objective.


A Checklist For API Observability

I have had the Wikipedia page for Observability open in a browser tab for weeks now. I learned about the concept from Stripe a while back, and it is something I am looking to use to help define APIs from an external vantage point. In this world of fake news and wild promises of artificial intelligence and machine learning, we need these black boxes to be as observable as they can be--I am hoping that APIs can be one of the tools in this observability toolbox.

Stripe is approaching this concept from the inside, with a focus on stability and reliability of their API operations. I am focusing on this concept from the outside, to "measure how well internal states of a system can be inferred by knowledge of its external outputs". More of a regulatory, auditing, and journalistic way of thinking, but in the API way of doing things. Some of this is about understanding, but some of it is also about holding providers accountable for what they are peddling.

The other day I mapped out what API monitoring means to me, rebranding it as what I'd prefer to call API awareness. I went through the elements I use to understand what is going on with APIs across the sector, but this time I am thinking about them in terms of observability. Meaning, not what I'm doing to be aware of APIs, but what is expected from providers to meet (my) minimum viable definition of a platform being observable.

  • Discovery - Do APIs exist? Are they easily discoverable? Are they public? Can anyone put them to use?
     
    • Applications - Find new applications built on top of APIs.
    • People - People who are doing interesting things with APIs.
    • Organization - Any company, group, or organization working with APIs.
    • Services - API-centric services that might help solve a problem.
    • Tools - Open source tools that put APIs to work solving a problem.
  • Versions - What versions are currently in use, what new versions are available, but not used, and what future versions are planned and on the horizon.
  • Paths - What paths are available for all available APIs, and what are changes or additions to this stack of resources.
  • Schema - What schema are available as part of the request and response structure for APIs, and available as part of the underlying data model(s) being used. What are the changes?
  • SDKs - What SDKs are available for the APIs I'm monitoring? What is new? What are the changes made regarding programming and platform development kits?

    • Repositories - What signals are available about an SDK regarding its Github repository (i.e., commits, issues, etc.)
    • Contributors - Who are the contributors.
    • Forks - The number of forks on an SDK.
  • Communication - What is the chatter going on around individual APIs, and across API communities. We need access to the latest messages from across a variety of channels.

    • Blog - The latest from each API blog.
    • Press - Any press released about APIs.
    • Twitter - The latest from Twitter regarding API providers.
    • Facebook - The latest Facebook posts from providers.
    • LinkedIn - The latest LinkedIn posts from providers.
  • Issues - What are the current issues with an API, either known by the provider or possibly also reported from within the community.
  • Change Log - What changes have occurred to an API, that might impact service or operations.
  • Road Map - What planned changes are in the road map for a platform, providing a view of what is coming down the road.
  • Monitoring - What are the general monitoring statistics for an API, outlining its overall availability.
  • Testing - What are the more detailed statistics from testing APIs, providing a more nuanced view of API availability.
  • Performance - What are the performance statistics providing a look at how performant an API is, and overall quality of service.
  • Authentication - What are all of the authentication approaches available and in-use. What updates are there regarding keys, scopes, and other maintenance elements of authentication.
  • Security - What are the security alerts, notifications, and other details that might impact the security of services, or the approach taken by a platform to making sure a platform is secure.
  • Terms of Service - What are the changes to the terms of service, or other events related to the legal operations of the platform.
  • Licensing - What licenses are in place for the API, its definitions, and any code and tooling put to use, and what are the changes to licensing.
  • Patents - Are there any patents in play that impact API operations, or possibly an entire industry or area of API operations.
  • Logging - What other logging data is available from API consumption, or other custom solutions providing other details of API operations.
  • Plans - What are the plans and pricing in existence, and what are the tiers of access--along with any changes to the plans and pricing in the near future.
  • Analysis - What tools and services are available for further monitoring, understanding, and deriving intelligence from individual APIs, as well as across collections of APIs.
  • Embeddables - What embeddable tooling are available for either working with individual APIs, or across collections of APIs, providing solutions that can be embedded within any website, or application.
  • Visualizations - What visualizations are available for making sense of any single API or collections of APIs, providing easy to understand, or perhaps more complex visualizations that bring meaning to the dashboard.
  • Integration - What integration platform as a service (iPaaS), continuous integration, and other orchestration solutions are available for helping to make sense of API operations within this world.

I do not think the presence of APIs automatically means a company, its data, content, or algorithms are observable. This list represents what I feel enables and empowers observability, and the key areas where I think companies need to do work when it comes to ensuring their operations are more observable. I pulled this list of elements from my own toolbox for monitoring the API sector, but it is trimmed down specifically for platform operators who might be thinking about how they can make things more observable for everyone involved, internal or external.

If you are like me and operating outside the corporate firewall, this list represents the areas you should be encouraging platform operators to invest in when it comes to pulling back the curtain a little bit more. Different folks will read this post and walk away with different takes on what observability might mean. Some will view it as a good thing, while others will see it as API driven regulatory interference. Your take probably has more to do with your position in the conversation than it does with the observability of anything.

Now for some concrete examples. What is the observability of the Facebook news algorithm? What is the observability of the Twitter feed? What is the observability of our credit score? How observable is COMPAS as it evaluates its risk score for whether a criminal offender will reoffend? How observable is Chicago's food inspection algorithm? How observable is the RSA algorithm which drives cryptography? How observable is algorithmic trading across stock markets? These are just a handful of some relatable algorithms when in reality there are numerous micro impacts of algorithms felt each moment, of each day, as we navigate our worlds.

I do not think there are easy answers when it comes to algorithmic transparency. This is just my attempt to understand how APIs can be put to work making the algorithms that are increasingly governing our reality more observable. I spend a lot of time trying to make sense of very obscure things like cloud and social. What are they? What can you do with them? Do they do what they promise? When do things change? I shared a version of this list for folks on the outside to develop API awareness. Now, these are my recommendations for what companies, organizations, institutions, and government agencies can do to make the data, content, and algorithms they use more observable--not just for our benefit, but for yours as well.


The API Definition For The Tyk API Gateway

If you are selling a service you should have an API. It is something you hear me talk about a lot here on the blog. I push on this subject because it is important, and there are numerous API service providers out there who do not have an API or choose to not make them available. In a DevOps, continuous integration world, we need the entire stack to have APIs--making our API platforms programmatic, just like the data, content, and algorithms we are making available via the APIs we are deploying.

If you need an example of this in the wild, you don't have to look much further than my partner in crime Tyk, who have a simple API for their API gateway--no matter where you deploy the gateway, you can manage it using its API. The Tyk API Gateway API provides you with a base set of paths for you to manage your gateway.

An open source, lightweight, fast and scalable API Gateway. Set rate limiting, request throttling, and auto-renewing request quotas to manage how your users access your API. Tyk supports access tokens, HMAC request signing, basic authentication and OAuth 2.0 to integrate old and new services easily. Tyk can record and store detailed analytics which can be segmented by user, error, endpoint and client ID across multiple APIs and versions. Integrate your existing or new applications with Tyk using a simple REST API, Tyk even supports hot-reloads so you can introduce new services without downtime.

Tyk API Management Paths Available (OpenAPI Spec)
  • /tyk/apis/ -- Get APIs [GET] - Gets a list of *API Definition* objects that are currently live on the gateway
  • /tyk/apis/ -- Create API [POST] - Create an *API Definition* object
  • /tyk/apis/{apiID} -- Delete API [DELETE] - Deletes an *API Definition* object, if it exists
  • /tyk/apis/{apiID} -- Get API [GET] - Gets an *API Definition* object, if it exists
  • /tyk/apis/{apiID} -- Update API [PUT] - Updates an *API Definition* object, if it exists
  • /tyk/health/ -- Check Health [GET] - Gets the health check values for an API if it is being recorded
  • /tyk/keys/ -- Get Keys [GET] - Gets a list of *key* IDs (will only work with non-hashed installations)
  • /tyk/keys/create -- Create Key [POST] - Create a new *API token* with the *session object* defined in the body
  • /tyk/keys/{keyId} -- Remove Key [DELETE] - Remove this *API token* from the gateway, this will completely destroy the token and metadata associated with the token and instantly stop access from being granted
  • /tyk/keys/{keyId} -- Add Custom Key [POST] - Add a pre-specified *API token* with the *session object* defined in the body, this operation creates a custom token that does not use the gateway naming convention for tokens
  • /tyk/keys/{keyId} -- Update Key [PUT] - Update an *API token* with the *session object* defined in the body, this operation overwrites the existing object
  • /tyk/oauth/authorize-client/ -- OAuth Authorize Client [POST] - The final request from an authorising party for a redirect URI during the Tyk OAuth flow
  • /tyk/oauth/clients/create -- OAuth Create Client [POST] - Create a new OAuth client
  • /tyk/oauth/clients/{apiId} -- OAuth Get Clients [GET] - Get a list of OAuth clients bound to this back end
  • /tyk/oauth/clients/{apiId}/{clientId} -- Delete Client [DELETE] - Delete the OAuth client
  • /tyk/oauth/refresh/{keyId} -- Invalidate Key [DELETE] - Invalidate a refresh token
  • /tyk/reload/ -- Reload Gateway [GET] - Will reload the targeted gateway
  • /tyk/reload/group -- Reload Group [GET] - Will reload the cluster via the targeted gateway
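To give a feel for how programmatic this makes the gateway, here is a rough Python sketch of calling the paths above. The gateway URL and secret are assumptions you'd replace with your own--Tyk expects the gateway secret in the x-tyk-authorization header:

```python
import json
import urllib.request

# Assumed values -- your gateway host and the secret set in tyk.conf
GATEWAY_URL = "http://localhost:8080"
TYK_SECRET = "your-gateway-secret"

def build_gateway_request(path, method="GET", body=None):
    """Build an authenticated request against the Tyk gateway API."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    return urllib.request.Request(
        GATEWAY_URL + path,
        data=data,
        method=method,
        headers={"x-tyk-authorization": TYK_SECRET,
                 "Content-Type": "application/json"},
    )

# List the API definitions live on the gateway
req = build_gateway_request("/tyk/apis/")
# Uncomment to actually call a running gateway:
# with urllib.request.urlopen(req) as resp:
#     apis = json.load(resp)
```

The same helper covers the other paths--POST a body to /tyk/apis/ to create a definition, or GET /tyk/reload/ to hot-reload the gateway.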

The Tyk API Gateway provides a base set of API management features that can be deployed in the cloud, on-premise, or on-device, making the key ingredients for API management programmable.

Tyk's API Gateway represents just one component in any API operation's toolbox. Tyk also provides an OpenAPI for the API gateway, making things much more plug and play as part of any API life cycle--something I've made even more discoverable using APIs.json. Their approach provides a nice blueprint that all API providers should be following--well-defined APIs for all your services (open source if you can ;-).

I track on any service or tool I include in my research like it has an API. If it does have an API, I profile it like I did with Tyk. If it doesn't, I will harass them until they have an API. If you are building tools, or selling services to the API space, you should have an API, as well as provide OpenAPI and APIs.json definitions for all your goods.


I Need Your Help With My API Definition Industry Guide

I am approaching seven years doing API Evangelist. I have over 70 areas of my core API lifecycle research available on the website and have four of those areas (definitions, design, deployment, & management) that I've been publishing industry guides for the last couple of years. In 2017, I want to take those guides, and hopefully a handful of other research areas to the next level. My guides have always been about the quantity of information, over the quality of the final guide. I want to turn that on its head and focus on the quality of information and presentation over the quantity, publishing an executive summary of each of my API industry research areas.

With my new guide, I am looking to add a touch of design, but I'm also looking to expand the exposure and storytelling opportunities for my partners in the space. Using Adobe InDesign I have been able to handle the design enhancements, but I am in need of help making sure my industry guides are ready for consumption by a wider, and more mainstream audience--this is where you come in. I need your feedback. Seriously, I need you to help me with everything from copy editing to being an overall critic--let me know what works and what doesn't--I'm looking to make this a community affair.

My API Definition research operates as a Github repository, providing access to the data and content behind each area. I use the Github Issues for each of my API research areas like I would for any other project I'm managing using Github. If you have the time, I would be grateful if you would take a look at my API definition guide and submit a single Github issue with your feedback. I'm even willing to give you some exposure in each edition, thanking the folks who helped out in the 'about page' of the guide. Also, if you help out enough I'd be willing to give one of the sponsor slots to your company, project, product, or service--seriously, I want this to be a community effort.

After a couple of weeks of beta testing each guide, and gathering feedback, I will be packaging the guide up for distribution through my partners, Amazon, and other distribution channels. I would really appreciate your help in fine tuning my work and making it something worthy of a more mainstream audience. I'll be repeating this process with my other industry guides for API design, deployment, and management. This will open up even further opportunities for exposure (if you are into that kind of thing). While the research portions of these guides are derived from my research, I'm looking for the stories, overall tone, and sponsor slots to be a community thing, so submit a Github issue, and let me know what you think--I really, really appreciate it.

Submit feedback to Github


An Example Of An API Service Provider Using Hypermedia

There is a growing number of hypermedia APIs available in the wild these days. However, there aren't a lot of examples of hypermedia API service providers making the API lifecycle more dynamic and living. When people ask me for examples of hypermedia APIs out there I like to have a handful of URLs I can share with them, providing a diverse set they can consider as part of their own operations.

One really good example of an API service provider putting hypermedia to use is Amazon Web Services--specifically with the AWS API Gateway.  AWS describes it best in the documentation for the gateway API:

The Amazon API Gateway web service is a resource-based API that uses Hypertext Application Language (HAL). HAL provides a standard way for expressing the resources and relationships of an API as hyperlinks. Using HAL, you use HTTP methods (GET, PUT, POST, DELETE) to submit requests and receive information about the API in the response. Applications can use the information returned to explore the functionality of the API.

If you have used other common AWS APIs like EC2 or S3, then you know that they aren't the best designed APIs out there. They provide a lot of functionality but leave a lot to be desired when it comes to the actual design. The AWS API Gateway API is a well designed, highly functional API for managing the operations of your API. With each API call, you get the desired response, along with a collection of links defining what else is possible.
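To illustrate what that collection of links looks like to a client, here is a small Python sketch working against a trimmed, hypothetical HAL response in the shape the AWS API Gateway API returns--the resource's state plus a _links map of what you can do next:

```python
# A trimmed, hypothetical HAL response -- resource state plus a _links
# map of the relations a client can follow next.
hal_response = {
    "id": "a1b2c3",
    "name": "petstore",
    "_links": {
        "self": {"href": "/restapis/a1b2c3"},
        "restapi:update": {"href": "/restapis/a1b2c3"},
        "restapi:delete": {"href": "/restapis/a1b2c3"},
        "restapi:deployments": {"href": "/restapis/a1b2c3/deployments"},
    },
}

def available_actions(hal_doc):
    """Return the link relations a client can follow, straight from the response."""
    return {rel: link["href"]
            for rel, link in hal_doc.get("_links", {}).items()
            if rel != "self"}

actions = available_actions(hal_response)
```

Instead of hardcoding URLs from the documentation, the client just reads what is possible out of each response--which is exactly the flexibility described above.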

I wish that all tools in our API toolbox were designed like the AWS API Gateway is. In this single API call you can see how hypermedia contributes to the life cycle of any API being managed. You can manage access, and evolve into a staging or production environment, and the other possibilities available when each resource is put to work. Instead of having to go back to the API documentation to learn what options are available, they are given to you in a tailored collection of links.

I have already added the AWS API Gateway to my list of hypermedia APIs, but I will now also be referencing it as a blueprint of an API service provider who is putting hypermedia to work in their API design. I think hypermedia helps make applications more flexible and resilient, and I also think hypermedia does the same at the API level, allowing us to more honestly manage change across the API lifecycle.


A Well Thought Out API Platform

I was playing with one of the API deployment solutions that I track on, appropriately called API Platform. It is an open source PHP solution for defining, designing, and deploying your linked data APIs. I thought their list of features provided a pretty sophisticated look at what an API can be, and was something I wanted to share.

There are a couple of key elements here. API definition-driven with JSON-LD, Hydra, HAL, and OpenAPI Spec out of the box. Containerized. Schema.org FTW! JWT, and OAuth. OWASP's security checklist. Postman Ready! These features make for a pretty compelling approach to designing and deploying your APIs. While I see some of these features in other platforms, it is the first open source solution possessing such an impressive resume.

I'm going to take this list and add it to my list of API design and deployment building blocks in my research. These are features that other API deployment solutions should be considering as part of their offering. This approach to API deployment may not be the right answer for every type of API, but I know many data and content focused APIs that would benefit significantly from a deployment solution like API Platform.


API Definitions Influencing API Design

I was having a conversation about whether I should be putting my API definition or my API design work first--which comes earlier in the lifecycle of an API? The conclusion was to put definition first because you need a common set of definitions to work with when designing your API(s). You need definitions like HTTP and HTTP/2. In 2017, you should be employing definitions like OpenAPI Spec, and JSON Schema. These definitions help set the tone of your API design process.

In my opinion, one of the biggest benefits of designing, developing, and operating APIs on the web has been forcing developers to pick up their heads and pay attention to what everybody else is doing and wanting. I suffer from this. Doing web APIs, providing my own, and consuming 3rd party APIs forces me to pay attention to providers and consumers outside my bubble--this is good.

Common definitions help us elevate the design of our APIs by leveraging common concepts, standards, and schema. Every time you employ ISO 8601, you have to think about folks in another time zone. Every time you use ISO 4217, you have to think about people who buy and sell their products and services in a different currency than you. When you use Schema.org, your postal addresses consider the world beyond just US zip codes, and a world wide web of commerce.
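A quick Python sketch of what employing these standards looks like in practice (the price payload shape is just a hypothetical example):

```python
from datetime import datetime, timedelta, timezone

# ISO 8601: one unambiguous timestamp format with the offset included,
# so a consumer in any time zone can interpret it correctly.
tokyo = timezone(timedelta(hours=9))
stamp = datetime(2017, 3, 1, 18, 30, tzinfo=tokyo).isoformat()
# e.g. '2017-03-01T18:30:00+09:00'

# ISO 4217: currency as an explicit three-letter code alongside the amount,
# never an assumed local symbol. (Hypothetical payload shape.)
price = {"amount": "19.99", "currency": "JPY"}
```

Nothing exotic, but every consumer of that payload now knows exactly which moment in time and which currency is meant, without guessing at your locale.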

I am placing definitions before design in my API research listing. In reality, I think this is just a cycle. Common definitions feed my design process, and the more experienced I get with design, the more robust my toolbox of API definitions gets. Ultimately this depends on what I'm calling a definition, but for the sake of this story I am considering the building blocks of the web as the first line of definitions, then secondarily the definitions that are using OpenAPI Spec and JSON Schema as the next line of definitions. Definitions influence my design process, and the design process is refining the definitions that I am putting to work. 


Trying To Define API Awareness

I have a regular call with a really smart API person who is trying to move forward a really cool project for the API space. It is some thought provoking voodoo and I need to be able to write about it--this is how I flesh out my thoughts and move forward. He is not quite ready to talk about his project publicly, so I will just talk about and explore it in terms of my API Evangelist research and how it applies to the area(s) of the API space he is looking to make an impact in.

This topic spans several areas of my API research, but if I had to give it a single label I would call it API awareness. When you hear me talk about monitoring the API space, API awareness is the result. I wanted to try and communicate this from my vantage point, but also share it with other analysts, practitioners, and even the average individual online today. This is my attempt to distil my approach to monitoring the API space and establishing a sustained awareness of APIs at any level.

Individual ("Normals")
It may sound crazy to you, but everyone should be API aware. Not that they should be paying attention to APIs like I do, or even at the level of the average individual working in the tech sector, but they should have a baseline awareness, and here is my attempt at quantifying that:

  • APIs Exist - Everyone should be aware that APIs exist, and hopefully have one or two examples of what they can do in their business or personal lives -- even if it's just pulling tweets, photos, or getting news updates via an RSS feed.
  • API Integration - Everyone should be aware that they can move data and content between the online services they use and depend on. If you know APIs exist and are aware of services like Zapier or Datafire, you will be more successful in what you do online.
  • Data Portability - All online services should allow for the downloading of their data, allowing for the portability of all users data and content.
  • API Discovery - With a low-level awareness of APIs and what is possible, the average individual should regularly be introduced to new APIs, and be given simple tooling and services for helping them in their discovery.
    • Applications - Everyone should get exposed to what other people are doing with APIs, and be informed about new and interesting ways to get things done for fun or business.
    • Individuals - Individuals within a company, institution, or online that can help with APIs.
    • Organizations - Organizations that can help individuals with API needs.
    • Events - Meetups, conferences and other events to learn about APIs.

The more we expose the average person to APIs, the more they will be able to absorb and understand. I've turned hundreds of average, non-technical folks on to the concept of APIs, and have seen them become evangelists and even API practitioners. Some move into API focused roles, but many are just more successful in what they are already doing, from social media work to sending out their weekly newsletter.

I've always put API awareness for individuals into the same bucket as financial awareness. You don't need an awareness of the inner workings of the banking and credit industry, but you should have an awareness that you have accounts, who has access to them, and that you can move money around, and have different accounts for different purposes, with different providers--the same applies to the world of APIs.

API Practitioners ("Not Normals")
When I first started writing this post, I had this section broken up into three groups: 1) Provider, 2) Service Provider, and 3) Analysts. Much of it ended up being redundant, so I'm going to share the complete list of what contributes to my API awareness, and depending on where you exist in the API spectrum (gonna have to use this one more), what matters to you will vary.

This is a master dump of my research, and the approach I have used to track on the world of APIs since 2010--an analyst's 100K view. However, API providers, service providers, evangelists, and analysts should possess a similar level of awareness--maybe not at the scope I pay attention to, but employing some of the same tactics, applied to a smaller group of APIs either internally or externally. Here is what I'd consider a comprehensive definition of my API awareness stack.

  • Exist - Everyone should be aware that APIs exist, and hopefully have one or two examples of what they can do in their business or personal lives--even if you are in a business unit, you should know about APIs.
  • Discovery - With a low-level awareness of APIs and what is possible, the average individual should regularly be introduced to new APIs, and be given simple tooling and services for helping them along in their discovery.
    • Applications - Find new applications built on top of APIs.
    • People - People who are doing interesting things with APIs.
    • Organization - Any company, group, or organization working with APIs.
    • Services - API-centric services that might help solve a problem.
    • Tools - Open source tools that put APIs to work solving a problem.
  • Versions - What versions are currently in use, what new versions are available, but not used, and what future versions are planned and on the horizon.
  • Paths - What paths are available for all available APIs, and what are changes or additions to this stack of resources.
  • Schema - What schema are available as part of the request and response structure for APIs, and available as part of the underlying data model(s) being used. What are the changes?
  • SDKs - What SDKs are available for the APIs I'm monitoring. What is new. What are the changes made regarding programming and platform development kits?
    • Repositories - What signals are available about an SDK regarding its Github repository (i.e. commits, issues, etc.)
    • Contributors - Who are the contributors.
    • Stargazers - The number of stars on an SDK.
    • Forks - The number of forks on an SDK.
  • Communication - What is the chatter going on around individual APIs, and across API communities. We need access to the latest messages from across a variety of channels.
    • Blog - The latest from each API blog.
    • Press - Any press released about APIs.
    • Twitter - The latest from Twitter regarding API providers.
      • Tweets - The tweets from API providers.
      • Mentions - The mentions of API providers.
      • Followers - Who is following their account.
    • Facebook - The latest Facebook posts from providers.
    • LinkedIn - The latest LinkedIn posts from providers.
    • Reddit - Any related Reddit post to API operations.
    • Stack Overflow - Any related Stack Overflow post to API operations.
    • Hacker News - Any related Hacker News post to API operations.
  • Support - What support channels are available for individual or groups of APIs, either from the provider or maybe a 3rd party individual or organization.
    • Forum / Group - What is the latest from groups dedicated to APIs.
    • Issues - What are the issues in aggregate across all relevant repositories.
  • Issues - What are the current issues with an API, either known by the provider or possibly also reported from within the community.
  • Change Log - What changes have occurred to an API, that might impact service or operations.
  • Road Map - What planned changes are in the road map for a platform, providing a view of what is coming down the road.
  • Monitoring - What are the general monitoring statistics for an API, outlining its overall availability.
  • Testing - What are the more detailed statistics from testing APIs, providing a more nuanced view of API availability.
  • Performance - What are the performance statistics providing a look at how performant an API is, and overall quality of service.
  • Authentication - What are all of the authentication approaches available and in-use. What updates are there regarding keys, scopes, and other maintenance elements of authentication.
  • Security - What are the security alerts, notifications, and other details that might impact the security of services, or the approach taken by a platform to making sure a platform is secure.
  • Terms of Service - What are the changes to the terms of service, or other events related to the legal operations of the platform.
  • Privacy - What are the privacy-related changes that would affect the privacy of end-users, developers, or anyone else impacted by operations.
  • Licensing - What licenses are in place for the API, its definitions, and any code and tooling put to use, and what are the changes to licensing.
  • Patents - Are there any patents in play that impact API operations, or possibly an entire industry or area of API operations.
  • Logging - What other logging data is available from API consumption, or other custom solutions providing other details of API operations.
  • Plans - What are the plans and pricing in existence, and what are the tiers of access--along with any changes to the plans and pricing in the near future.
  • Partners - Who are the existing platform partners, and who are the recent additions. Maybe some finer grain controls over types of partners and integrations.
  • Investments - What investments have been made in the past, and what are the latest investments and filings regarding the business and investment of APIs.
    • Crunchbase - The latest, and historical from Crunchbase.
    • Angellist - The latest, and historical from Angellist.
  • Acquisitions - What acquisitions have been made or being planned--showing historical data, as well as latest notifications.
    • Crunchbase - The latest, and historical from Crunchbase.
    • Angellist - The latest, and historical from Angellist.
  • Events - What meetups, conferences and other events are coming up that relevant APIs or topics will be present.
  • Analysis - What tools and services are available for further monitoring, understanding, and deriving intelligence from individual APIs, as well as across collections of APIs.
  • Embeddables - What embeddable tooling are available for either working with individual APIs, or across collections of APIs, providing solutions that can be embedded within any website, or application.
  • Visualizations - What visualizations are available for making sense of any single API or collections of APIs, providing easy to understand, or perhaps more complex visualizations that bring meaning to the dashboard.
  • Integration - What integration platform as a service (iPaaS), continuous integration, and other orchestration solutions are available for helping to make sense of API operations within this world.
  • Deprecation - What deprecation notices are on the horizon for APIs, applications, SDKs, and other elements of API operations.
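To make one of these awareness signals concrete, here is a small Python sketch diffing two hypothetical snapshots of an API's paths--the kind of change detection that feeds the Paths and Change Log items above:

```python
# Hypothetical snapshots of an API's paths, pulled from its OpenAPI
# on two different days -- a diff like this is one small awareness signal.
yesterday = {"/organizations/", "/locations/", "/services/"}
today = {"/organizations/", "/locations/", "/services/", "/contacts/"}

added = sorted(today - yesterday)     # new surface area to be aware of
removed = sorted(yesterday - today)   # potential breaking changes
```

Run against real definitions on a schedule, a handful of set operations like this turns a pile of OpenAPI files into the versions, paths, and change log awareness described above.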

API awareness spans many stops along the API lifecycle, and across a variety of the most common, and critical building blocks of what drives API ecosystems. Awareness doesn't come easy. It takes time, and access to the right information and signals, potentially across many different entities and domains--aggregating, filtering, and ranking are essential to developing and strengthening your awareness. In the end, even with the same signals and information available, there will be many definitions of what is the necessary awareness.

I am glad I didn't break this into different buckets for different people. I think that is a dangerous thing for us to do. I think people should be curious and have agency in the decisions regarding which signals should feed their awareness. How many APIs. Which APIs. Which companies, news, events, and other areas. Investment, patents, and other legal aspects. I don't think that all individuals should be bombarded with the more complex inner workings of the API industry I pay attention to, but they should be able to make the decision to move beyond a basic level of understanding, and become an evangelist, or analyst--quantifying and developing the awareness they desire or need to achieve.

I'll stop there. This is a good first draft of what I consider API awareness. At the individual API level, across a collection or industry of APIs, and even at the analyst levels, potentially paying attention to many different APIs across a variety of different industries. Before I put this definition down, I am going to take it and apply it in two other ways: 1) Observability, "measure for how well internal states of a system can be inferred by knowledge of its external outputs", and 2) Rating, "establish a rating system that articulates where an API or provider exists on the awareness and observability spectrum". We'll see where it goes. Thinking about this voodoo is helping me better organize some of the existing parts of my research, and hopefully help my friend out in his work as well.


API Lifecycle Service Providers Instead Of Walled Gardens

It is a common tactic of older software companies to offer open source, services, and tools in a way that all roads lead into their walled garden. There are many ways to push vendor lock-in, and the big software vendors from 2000 through 2010 have mastered how to route you back to their walled gardens and make sure you stay there. Web APIs have set into motion a shift in how we architect our web, mobile, and device applications, as well as how we provide services to the life cycle behind the operation of these web APIs. While this change has the potential to be positive, it can often be very difficult to tell the newer breed of software companies apart from the legacy version, amidst all the hype around technology and startups.

I've been having conversations recently which are pushing me to think more about middleware, or what I'd refer to as API life cycle tooling. In my opinion, these are companies who are selling services and tools to the API life cycle, which in turn is fueling our web, mobile, device, and other applications. As a service provider, you should be selling to a company and API provider's life cycle needs, not your walled garden needs. I understand that you want all of a company's business, and you want to lock them into your platform, but that was how we did business 10 years ago.

The API service providers I'm shining a light on in 2017 are servicing one or many stops along the API life cycle, supporting API definitions, and providing value without getting in the way, or locking customers in. They do this on-premise, or in the cloud of your choice, and allow you to seamlessly overlap many different API service providers delivering a variety of solutions across the API life cycle. You will notice this pattern in the companies I partner with like APIMATIC, Restlet, Tyk, and Dreamfactory. I find I have a lot more patience when it comes to the whole startup thing if your service is plug and play and us API providers can choose where and when we want to put your tools and services to use.

I want my API service providers to behave just like they recommend to their customers--modular, flexible, agile, and providing a mix of valuable API resources I can use across my own API lifecycle. You'll find me doing more highlighting of what I consider to be API life cycle service providers who bring value to the API life cycle, with API definitions like the OpenAPI Spec as the center of their operations.


Box's Seamless Approach To API Documentation

The document platform Box updated their developer efforts recently, helping push forward the definition of what API documentation can be. I've long been advocating moving APIs out from the shadow of the developer portal, and making them more seamless with any UI, kind of like CloudFlare does with their DNS dashboard. There is no reason the API should have to be hidden from users--it should be right behind the UI for everyone to discover.

Box does this. You can interact with files just like it is the regular interface. When you push the get folder items, upload file, or other options available to you in the documentation--you get an example API request and response in the right-hand column. It is a blend of a regular UI, and some of the attractive and interactive documentation we've seen emerge lately like ReDoc, making it easy to see and understand what an API does, while speaking in the context of solving a relevant problem for a human.

API documentation doesn't have to be overly technical and boring. It can look like a regular user interface, and the API can be right behind the UI curtain, providing a snapshot of the requests and responses that are doing the heavy lifting behind it. I'm finally seeing the movement I have wanted to see with API documentation in 2017. I'm feeling like this is going to be a common theme with the world of APIs for all of us--we will never see things move as fast as we want, but eventually the world evolves, and we will see investment in the areas that make a difference on the ground at API operations, and for the consumers who are putting APIs to work in their regular world.


API Life(middleware)Cycle API

I have had a series of calls with an analyst group lately, discussing the overall API landscape in 2017. They have a lot of interesting questions about the space, and I enjoyed their level of curiosity and awareness around what is going on--it helps me think through this stuff, and (hopefully) better explain it to folks who aren't immersed in API like I am. 

This particular group is coming at it from a middleware perspective and trying to understand what APIs have done to the middleware market, and what opportunities exist (if at all). This starting point for an API conversation got me thinking about the concept of middleware in contrast to, or in relationship to what I'm seeing emerge as the services and tooling for the API life cycle.

Honestly, when I jumped on this call I Googled the term middleware to provide me with a fresh definition. Middleware: software that acts as a bridge between an operating system or database and applications, especially on a network. What does that mean in the age of APIs? Did APIs replace this? There is middleware for deploying APIs from backend systems. There is middleware for brokering, proxying, and providing a gateway for APIs--making middleware as a term pretty irrelevant. I think middleware traditionally meant a bridge between the backend and the frontend, where web APIs make things omnidirectional--in the middle of many different directions and outcomes.

The answer to the question of what APIs have done to middleware is just "added dimensions to its surface area". Where is the opportunity? "All along the API lifecycle". Middleware (aka services & tooling) is popping up throughout the life cycle to help design, deploy, manage, test, monitor, integrate, and numerous other stops along the API life cycle. All the features of our grandfather's API gateway are now available as a microservices buffet, allowing us to weave middleware nuggets into any system to system integration as well as other web, mobile, and device applications.

Middleware as a concept has been distilled down into its smallest possible unit of value, made available via an API, deployed in a virtualized environment of your choosing, on the web, on-premise, or on-device. This new approach to delivering services, and tooling is often still in the middle, but the word really doesn't do it justice anymore. I wanted to go through all the areas of my research and look for any signs of middleware or its ghost.

Some of the common areas of my research fit pretty nicely with some earlier concepts of what middleware does or can do. I would say that Database, Deployment, Virtualization, Management, Documentation, Change Log, Testing, Performance, Authentication, Encryption, Security, Command Line Interface (CLI), Logging, Analysis, and Aggregation are some pretty clear targets. Of course, this is just my view of what middleware was to me, from say 1995 through 2007--after that, everything began to shift because of APIs.

As web APIs evolved, the reasons you'd buy or sell a tool or service to be in the middle of some meaningful action multiplied, with APIs now being about Software Development Kits (SDK), Embeddables, Visualization, Webhooks, iPaaS, Orchestration, Real Time, Voice, Spreadsheets, Communication, Support, Containers, Serverless, and Bots. This is where things really began working in many different directions, making the term middle seem antiquated to me. You are now having to think beyond just any single application, and all your middleware is now very modular, API-driven, and can be plugged in anywhere along the life cycle of not just any application, but also any API--mind blown.

The schism in middleware for me began when companies started cracking open the SDK and were using HTTP to give access to important resources like compute and storage with AWS, and SMS with Twilio, offering a peek behind the curtain for developers. Then it further expanded to regular humans with embeddable tooling, iPaaS with services like Zapier, and other services and tools that anyone can implement, no coding necessary. All of this was fueled by mobile, and the need to break down the data, content, and algorithms for use in these mobile applications. Things have quickly gone from backend to frontend, to everywhere to everywhere. How do you get in the middle of that?

Anyways. I'm guessing this story might be a little incoherent. I'm just trying to find my words for future conversations on this topic. As the regular world is confronted with this API thing, people have a lot of questions, and they need help understanding how we got here. Honestly, I feel like I don't fully have a grasp on how we got here. So writing about it helps me think through this stuff, and polish it a little bit for the next round of conversations.


A CKAN OpenAPI Spec

I was working on publishing an index of the General Services Administration (GSA) APIs I currently have in my API monitoring system, and I remembered that I had updated my Data.gov work, publishing a cache of the index on Github. As part of this work I had left a note for myself about finding or creating an OpenAPI spec for the Data.gov API, which, since it is a CKAN implementation, should be pretty easy--I hoped.

After Googling for a bit I found one created by the French government open data portal--thank you! It looks pretty complete with 102 paths, and 79 definitions, providing a pretty nice jumpstart for anyone looking to document their CKAN open data implementation.

This API definition can be used to generate API documentation using Swagger UI or ReDoc, as well as generate SDKs using APIMATIC, and monitoring using Runscope or API Science. If you come across any other API definitions for CKAN, or any interesting documentation and other tools--please let me know, I want to keep aggregating CKAN related solutions.
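To give a sense of what such a definition looks like, here is a rough, hand-written sketch of a single path, not taken from the French portal's spec. The path and parameter follow CKAN's documented Action API, but the summary text and structure here are my own illustration:

```yaml
swagger: "2.0"
info:
  title: CKAN Action API (illustrative fragment)
  version: "3"
basePath: /api/3/action
paths:
  /package_list:
    get:
      summary: Return the names of all datasets on the portal
      parameters:
        - name: limit
          in: query
          type: integer
          required: false
          description: Maximum number of dataset names to return
      responses:
        200:
          description: A success flag and an array of dataset names
```

A full definition like the French one repeats this pattern across a hundred or so action paths, which is what makes it such a useful jumpstart.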

Open source tools that have APIs, and have open API definitions like this, are the future. These are the tools that companies, institutions, organizations, and government agencies should be putting to work in their operations, because they not only help reduce costs, but having an API that uses common API specifications also means it will speak the same language as other important tools and services, increasing the size of the toolbox available for your API operations.


Using Github As An API Index And Data Store

I am spending a lot of time studying how companies are using Github as part of their software and API development life cycle, and how the social coding platform is used. More companies like Netflix are using it as part of their continuous integration workflow, something that API service providers like APIMATIC are looking to take advantage of with a new wave of services and tooling. This usage of Github goes well beyond just managing code, making the platform more of an engine in any continuous integration and API life cycle workflow.

I run all my API research project sites on Github. I do this because it is secure and static, and introduces a very potent way to manage not just a single website, but over 200 individual open data and API projects. Each one of my API research areas leverages a Github Jekyll core, providing a machine readable index of the companies, news, tools, and other building blocks I'm aggregating throughout my research.

Recently, this approach has moved beyond the core areas of my API research and is something I'm applying to my API discovery work, profiling the resources available with popular API platforms like Amazon Web Services, and across my government work like with my GSA index. Each of these projects is managed using Github, providing a machine readable index of the disparate APIs, in a single APIs.json index which includes an OpenAPI spec for each of the APIs included. When complete, these indexes can provide a runtime discovery engine for APIs used as part of integrations, providing an index of single APIs, as well as potentially bringing many distributed APIs together into a single meaningful collection.

I've started pushing this approach even further with my Knight Foundation funded Adopta.Agency work, not just making the Github repository a machine-readable index of many APIs, but also using the _data folder as a JSON or YAML data store, which can then also be indexed as part of the APIs.json and OpenAPI spec for each project. I've been playing with different ways of storing and working with JSON and YAML in Jekyll on Github for a while now, but now I'm trying to develop projects that are a seamless open data store, as well as an API index, providing the best of both worlds.
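As a quick illustration of the _data folder approach, a hypothetical `_data/organizations.yml` file holding open data records can be rendered (or re-serialized as JSON) anywhere in the Jekyll site with a few lines of Liquid. The file name and fields here are made up for the example:

```liquid
{% comment %} Render _data/organizations.yml (hypothetical file with
    name + url fields) as an HTML list on any page of the site {% endcomment %}
<ul>
{% for org in site.data.organizations %}
  <li><a href="{{ org.url }}">{{ org.name }}</a></li>
{% endfor %}
</ul>
```

The same `site.data.*` access can output raw JSON onto a page, which is what turns a static Jekyll repository into a free, forkable, read-only API.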

This is not a model for delivering high performance and high availability APIs. This is a model for publishing and sharing open data so that it is highly available, workable, and hosted on Github for FREE. Most of the data I work with is publicly available. It is part of what I believe in, and how I work on a regular basis. Making it available in a Github repo allows it to be forked, or even consumed directly, while offloading bandwidth and storage costs to Github. The GET layer for all my open data projects is static, and dead simple to work with. Next, I'm working on a truly RESTful augmented layer providing the POST, PUT, and DELETE, as well as more advanced search solutions.

I am using the Github API for this augmented layer. I am just playing with different ways to proxy it and deliver the best search results possible. The POST, PUT, PATCH, and DELETE layer for each Github repository data store in the _data folder is pretty straightforward. My goal is to offload as much of the payload to Github as possible, then augment what it can't do when it comes to more advanced usage. I'm looking for each API index and data store to act as a forkable engine for a variety of stops along the API life cycle, as well as throughout the delivery of the web, mobile, and device-based applications we are building on top of them.
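To make the write layer concrete, here is a small sketch of how a file in the _data folder can be updated through GitHub's contents API (`PUT /repos/{owner}/{repo}/contents/{path}`). The repo, file path, and sha value below are hypothetical; the shape of the request body follows GitHub's documented API, which expects base64-encoded content:

```javascript
// Build the JSON body GitHub's contents API expects: a commit message,
// the new file content encoded as base64, and the current blob sha
// when updating an existing file.
function buildUpdatePayload(message, fileText, sha) {
  const payload = {
    message: message,
    content: Buffer.from(fileText, "utf8").toString("base64"),
  };
  if (sha) payload.sha = sha; // required for updates, omitted for new files
  return payload;
}

// Example: replacing a hypothetical _data/services.yml with a new record.
const payload = buildUpdatePayload(
  "Add new service record",
  "- name: Food Bank\n  category: food\n",
  "abc123" // sha of the existing file version (made up for the example)
);

// The payload would then be sent with an authenticated request to:
// PUT https://api.github.com/repos/OWNER/REPO/contents/_data/services.yml
```

Each successful PUT is just a commit, so the data store keeps a full version history for free, which is part of the appeal of this model.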


The Reasons Why We Pull Back The Curtain On Technology

Photo by Shelah

I was trying to explain to a business analyst this week the difference between SDK and API, terms he said were often used interchangeably by the people he worked with. In my opinion SDK and API can be the same thing, depending on how you see this layer of our web, mobile, and device connectivity. The Internet has been rapidly expanding this layer for some time now, and unless you are watching closely, you really don't see any difference between API and SDK--it is just where the software connects everything.

For me, an SDK is where the data, content, and algorithmic production behind the curtain is packaged up for you--giving you a pre-defined look at what is possible, prepared for you with a specific language or platform in mind. Most of the hard work of understanding what is going on has been translated and crafted, providing you with a set of instructions for what you can do with this resource in your application--your integration is pretty rigidly defined, with not much experimentation or hacking encouraged.

An API has many of the same characteristics as an SDK, but the curtain is pulled back on the production a little bit more. Not entirely, but you do get a little more of a look at how things work, what data and content are available, and algorithmic resources are accessible. You still get the view which a provider intends you to have, but there are fewer assumptions about what you'll do with the resources put on the interface, leaving you to do more of the heavy lifting with how these resources will get put to use.

Most of the early motivations behind choosing an open approach to web APIs over a more closed and proprietary SDK, pulling back the curtain on how we develop software, weren't entirely intentional. Companies like Flickr and Twitter weren't trying to make their mark on the politics of how we integrate software; they were busy, and looking to encourage 3rd party developers to do the hard work of crafting SDKs and other platform integrations. The reasons for pulling back the curtain on how the sauce is made were purely about furthering their own needs, and not necessarily about moving the needle forward regarding how we talk about software integration--it was just business, enabled by tech (HTTP), and the politics came later, as a sort of side-effect.

Many traditional software developers and software-enabled hardware manufacturers have a hard time seeing this expansion in how we integrate with software, and are usually still very SDK oriented, even if there are many APIs right behind their SDK curtain. They do this for a variety of technical, business, and political reasons. It is my personal mission to help these folks understand a little more about this expansion in the software connectivity layer, and the benefits brought to the table by being more open. We need the client integration (SDK) and API to be loosely coupled from a technical, business, and political stance--to make things work in a web-enabled environment.

It isn't easy to help business folk see the importance of leaving this layer open. This is the damaging effect of the Oracle vs. Google Java copyright case: it gums up and slows this expansion, something we need to encourage and keep open, even if the big tech companies don't fully get it. We are going to need this momentum not just to keep web, mobile, and device integration accessible and loosely coupled, but also to help make sure the growing number of algorithms that are impacting our world are more observable as well. Providers aren't going to be willing to pull back the curtain on the smoke and mirrors that are AI, machine learning, and other algorithmic varietals infecting our lives.

There are many reasons why we pull back (or don't) the curtain on technology, at the application, SDK, API, or algorithmic levels. I don't count on companies, institutions, and government agencies to ever do this for the right reasons. I'm counting on them doing it for all the wrong reasons. I am looking at incentivizing their competitors to do it, helping influence policy or law to direct the systems to behave in a certain way, and encourage companies to be lazy, and keep the curtain pulled back because it's easier. Getting a peek behind the curtain, or convincing some to pull it back is never a straightforward conversation--you often have to use many of the same tactics and voodoo employed by tech providers to get what you want.


Where Are The Interesting API Bookmarklet Examples?

I have been kvetching about the quality of embeddable tooling out there, so I'm working on discovering anything interesting. I started with bookmarklets, which I think is one of the most underutilized, and simplest examples of working with APIs on the web. Here are a couple of interesting bookmarklets for APIs out there:

  • Twitter - Probably the most iconic API and bookmarklet out there -- share to Twitter.
  • Pinboard - An API-driven bookmarklet for saving bookmarks that I use every day.
  • Hypothesis - A whole suite of API-driven bookmarklets for annotating the web.
  • Socrata - A pretty cool bookmarklet for quickly viewing documentation on datasets.
  • Tin Can API - A bookmarklet for recording self-directed learning experiences.

When you search for API bookmarklets you don't get much, and nothing stands out as being innovative. I will keep looking when I have time, and I'll keep curating and trying to understand any new approaches, examples, and tooling when possible.

Ultimately it just confounds me, because a simple JS bookmarklet triggering one or more API interactions is a no-brainer. We have examples of this in action, making an impact on login, sharing, annotation, and more, so why don't we have more examples? IDK. It is something I'll explore as I push forward my embeddable API research.
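To show just how little code is involved, here is a minimal share-to-Twitter style sketch. The pure function builds a Twitter web intent URL, and the comment at the bottom shows how the same logic collapses into a one-line bookmarklet using the current page's title and address:

```javascript
// Build a Twitter web intent URL for sharing a page. This uses Twitter's
// documented https://twitter.com/intent/tweet endpoint; the title and URL
// values are whatever page the user is on.
function buildTweetIntentUrl(title, pageUrl) {
  return "https://twitter.com/intent/tweet?text=" +
    encodeURIComponent(title) +
    "&url=" + encodeURIComponent(pageUrl);
}

// As a browser bookmarklet, the same thing becomes a single line saved
// to the bookmark bar (document.title and location.href come from the page):
// javascript:window.open('https://twitter.com/intent/tweet?text='+encodeURIComponent(document.title)+'&url='+encodeURIComponent(location.href))
```

That is the entire pattern: one JavaScript expression, one API interaction, no installation, which is why the lack of creative examples out there surprises me.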

Maybe I'm just missing something...