The API Evangelist Blog
This blog represents the thoughts I have while I'm researching the world of APIs. I share what I'm working on each week, and publish daily insights on a wide range of topics from design to deprecation, spanning the technology, business, and politics of APIs. All of this runs on Github, so if you see a mistake, you can either fix it by submitting a pull request, or let me know by submitting a Github issue for the repository.
My Response On The Department Of Veterans Affairs (VA) RFI For The Lighthouse API Management Platform
26 Oct 2017
I am working with my partners in the government API space (Skylight, 540, Agile Six) to respond to a request for information (RFI) out of the Department of Veterans Affairs (VA), for what they call the Lighthouse API Management platform. The RFI provides a pretty interesting look into the way the government agency which supports our vets is thinking about how they should be delivering government resources using APIs, as well as the role they play in the wider healthcare ecosystem. My team is meeting today to finalize our response to the RFI, and in preparation I wanted to gather my thoughts, which, in my style of doing things, involves publishing them here on API Evangelist.
You can read the whole RFI, but I’ll provide the heart of it, to help set the table for my response.
Introduction: To accelerate better and more responsive service to the Veteran, the Department of Veterans Affairs (VA) is making a deliberate shift towards becoming an Application Programming Interface (API) driven digital enterprise. A cornerstone of this effort is the setup of a strategic Open API Program that is adopting an outside-in, value-to-business driven approach to create APIs that are managed as products to be consumed by developers within and outside of VA.
Objectives: VA has started the process of establishing an API Management Platform, named Lighthouse. The purpose of Lighthouse is to establish the Next Generation Open Digital Platform for Veterans, accelerating the transformation in core domains of VA, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem. These APIs will allow VA to leverage its investment in various digital assets, support application rationalization, and allow it to decouple outdated systems and replace them with new, commercial, off the shelf, Software as a Service (SaaS) solutions. It will enable creation of new, high value experiences for our Veterans, VA’s provider partners, and allow VA’s employees to provide better service to Veterans.
With some insight into how they will achieve those objectives:
- Integrate more effectively with the community care providers by connecting members of a Veteran’s “Care Team” from within and outside the Veterans Health Administration (VHA); and, generate greater opportunities for collaboration across the care continuum with private sector providers,
- Effectively shift technology development to commercial electronic health record (EHR) and administrative systems vendors that can integrate modular components into the enterprise through open APIs, thus allowing VA to leverage these capabilities to adopt more efficient and effective care management processes,
- Foster an interoperable, active, innovation ecosystem of solutions and services through its Open API Framework that contributes to the next generation of healthy living and care models that are more precise, personalized, outcome-based, evidence-based, tiered and connected across the continuum of care regardless of where and how care is delivered,
- Create an open, accessible platform that can be used not only for Veterans care but also for advanced knowledge sharing, clinical decision support, technical expertise, and process interoperability with organizations through the U.S. care delivery system. By simplifying access to the largest data set of clinical data anywhere, albeit de-identified, it will accelerate the discovery and development of new clinical pathways for the benefit of the Veterans and the community at large.
They include some bullets for what they see as the road map for the Lighthouse API management platform:
- The API Gateway, through creating the facades for common, standard Health APIs will allow multiple EHRs to freely and predictably interoperate with each other as VA deploys the COTS EHR through the enterprise and finds itself with multiple EHR’s during the transition and stabilization phases of the implementation.
- Establish the Single Point of Interoperability with health exchanges and EHR systems of Community Care Providers.
- The development of APIs across the enterprise spanning additional domains following a federated development approach that can see an exponential growth in the number APIs offered by the platform.
- Operate at scale to support the entire population of Veterans, VA Employees and Community Providers that provide a variety of services to the Veterans
This is all music to my ears. These are objectives, and a road map, that I can get behind. It is an RFI that reflects where all federal agencies should be going, but it is also extra meaningful for me to see this coming out of the VA. Definitely making it something I want to support however I can. The window for responding is short, so I wanted to stop what I'm doing, and give the RFI a proper response, regardless of who ends up running the platform. I've done this type of RFI storytelling several times in the past, with the FAFSA API for the Department of Education, the Recreational Information Database (RIDB) out of the Department of Interior, when Commerce hired a new Chief Data Officer, and just in general support of the White House API strategy. This time is different, only because I have a team of professionals who can help me deliver beyond just the RFI.
Some Background On Me And The VA
Before I get to the questions included as part of the RFI, I wanted to give some background on my relationship to the VA, and supporting veterans. First, my biological father was career military, and the father who raised me was a two tour Vietnam veteran, exposing me to the veterans administration and hospitals at an early age. He passed away in the 1990s from cancer he developed as a result of his Agent Orange exposure. All of this feeds into my passion for applying APIs in this area of our society. Additionally, I used to work for the Department of Veterans Affairs, as a Presidential Innovation Fellow in 2013. I didn't stay for the entire fellowship, exiting during the government shutdown, but I have continued to work on opening up data sets, supporting VA related API conversations, and trying to keep in tune with anything that is going on within the agency. All contributing to this RFI making me pretty happy.
My Answers To The RFI Questions
The VA provides 20 separate questions as part of their RFI, shining a light on some of the thinking that is occurring within the agency. I've bolded their questions, and provided some of my thoughts inline, sharing my official responses. My team will work to combine everyone's feedback, and publish a formal response to the VA's RFI.
**1. Drawing on your experience with API platforms, how do you see it being leveraged further within the healthcare industry in such a manner as described above? What strengths and opportunities exist with such an approach in healthcare?**
An API first platform as proposed by the VA reflects what is already occurring in the wider healthcare space. With veteran healthcare spending being such a significant portion of healthcare spending in this country, the presence of an API platform like this would not just benefit the VA, veterans, veteran hospitals, and veteran service organizations (VSO), it would benefit the larger economy. A VA API platform would be the seed for a federated approach to not just consuming valuable government resources, but also deploying APIs that will benefit veterans, the VA, as well as the larger healthcare ecosystem.
Some of the strengths of this type of approach to supporting veterans via an open data and API platform, with a centralized API operations strategy would be:
- Consistency - Delivering APIs as part of a central vision, guided by a central platform, but echoed by the federated ecosystem can help provide a more consistent platform of APIs.
- Agility - An API-first approach across all legacy, as well as modern VA systems will help break the agency, and its partners into much smaller, bite-size, reusable components.
- Scope - APIs excel at decoupling legacy systems that are often large and complex, delivering much smaller, reusable components that can be developed, delivered, and deprecated in smaller chunks, and that work at scale.
- Observability - A modern approach to delivering an API platform for government with a consistent model for API definitions, design, deployment, management, monitoring, testing, and other steps of the API lifecycle, combined with a comprehensive approach to measuring and analysis brings much needed observability to all stops along the lifecycle. Letting in the sunlight required for success at scale.
- Feedback Loops - An API platform at this scale, with the appropriate communication and support channels, opens up feedback loops for all stops along the lifecycle, benefitting from community and partner input from design to deprecation, making all VA systems stronger as a result.
These platform strengths set the stage for some interesting and beneficial opportunities to emerge from within the platform community, but also from the wider healthcare ecosystem that already exists, and is evolving, which will step up and engage with available VA resources. Here are some examples of how this can evolve, based upon existing API ecosystems within and outside the healthcare sector:
- Practitioner Participation - An open API platform always starts with engaging backend domain experts, allowing for the delivery of APIs that deliver access to critical resources. Second, modern API platforms help open up, and attract external domain experts, and in the case of a VA platform, this would mean more engagement with health care practitioners, further rounding off what goes into the delivery of critical veteran services.
- Healthcare Vendor Integration - Beyond practitioners, a public API platform for the VA would attract existing healthcare vendors, providing EHR, and other critical services. Vendors would be able to define, design, deploy, and integrate their API driven solutions outside the tractor beam of internal VA operations.
- Partner Programs - The establishment of a formal partner program accompanies the most mature API platforms available, allowing for more trusted engagement across the public and private sector. Partners will bring needed resources to the ecosystem, while benefitting from deeper access to VA resources.
- Certification - Another benefit of a mature API platform is the ability to certify partners, applications, and developers, establishing a much more trusted pool of vendors, and developers that the VA can benefit from, as well as rest of the ecosystem.
- Standardization - A centralized API platform helps aggregate common schema, API definitions, models, and blueprints that can be used across the federated public and private sector. Helping establish consistency across VA API resources, but also in the tooling, services, and other applications built on top of VA APIs.
- Data Ecosystems - A significant number of VA APIs available on the platform will be driven by valuable data sets. The more APIs, and the consistency of APIs across many internal data stewards, as well as trusted partner sources will establish a data ecosystem that will benefit veterans, the VA, as well as the rest of the ecosystem.
- Community Participation - Open API platforms bring in a wide variety of developers, data scientists, healthcare practitioners, and folks who are passionate about the domain. When it comes to supporting veterans, it is essential that there is a public platform and forum for the community to engage around VA resources. Helping share the load with the community, when it comes to serving veterans.
- Marketplace - With an API platform as proposed by this RFI, a natural opportunity is a marketplace. An established process and platform for internal groups to publish their APIs, as well as supporting SDKs, and other applications. This marketplace could also be made available to trusted partners, and certified developers, allowing them to showcase their verified applications, and be listed as a certified developer or agency. The marketplace opportunity will bring the most benefit to the VA when it comes to delivering across many internal groups.
The benefit of an API platform such as the one proposed in this RFI is that it makes the delivery of critical services a team effort. Domains within the VA can do what they do best, and benefit from having a central platform, and team, to help them deliver consistent, reliable, scalable API access to their resources, for use across web, mobile, device, spreadsheet, analysis, and other applications. Externally, healthcare vendors, hospitals, practitioners, veterans, and the people that support them can all work together to put valuable API resources to work, helping make sense of data, and deliver more modular, meaningful applications that all focus on the veteran.
The fact that this RFI exists shows that the VA is looking to keep pace with where the wider healthcare sector is headed. Four out of five of the leading EHR providers have existing API platforms, demonstrating where the healthcare sector is headed, and reflecting the vision established in this RFI. The strength and the opportunity of such an approach to delivering healthcare services for veterans will be fully realized when the VA becomes interoperable, and plug and play, with the wider healthcare sector.
2. Describe your experience with different deployment architecture supported by MuleSoft (e.g. SaaS only, On Premise Only, Private cloud, Hybrid – Mix of SaaS and Private Cloud) and in what industry or business process it was used? Please include whether your involvement was as a prime or subcontractor, and whether the work was in the commercial or government sector.
My exposure to Mulesoft in a production environment has been limited, especially within the last couple of years. However, during my time at the Department of Veterans Affairs, and working in government, I was actively involved with the company, regularly engaging with their product and sales teams in several areas:
- Industry Research - I was paid by Mulesoft to conduct API industry research back in 2012 and 2013, providing regular onsite presentations regarding my work, contributing to their road map.
- Messaging Partner - I worked with Mulesoft on the creation of short form and long form content around the API industry, and as part of my API industry conference.
- Sales Process - I have sat in on several sales meetings for Mulesoft with government agencies, enterprise organizations, and higher educational institutions, and am familiar with what they offer.
- RAML Governance - I was part of the early discussion, formation, and feedback on the RAML API specification forum, but left shortly after I left the government.
Based upon my experience working with the Mulesoft team, and through my regular monitoring of their services and tooling, and engaging with some of their clients, I am confident in my ability to tailor a platform strategy that would work with their API gateway, IAM, definitions, discovery, and other solutions. I have been one of the analysts in the API sector who studies Mulesoft, and the other API management providers, and understand the strengths and weaknesses of each of the leading vendors, as well as maintain an understanding of how API management is being commoditized, standardized, and applied across all the players. I'm confident this knowledge will transfer to delivering an effective vision for the VA, involving Mulesoft solutions.
3. Describe any alternative API Management Platforms that are offered as SaaS offerings, On Premise, Private Cloud, or Hybrid. Please detail how these solutions can scale to VA’s needs managing approximately 56,000 transactions per second through connecting to VistA and multiple commercial and open source EHRs (conforming to US Core Profiles of the HL7 FHIR standards), multiple commercial Enterprise Resource Planning (ERP) systems, various home grown systems including Veterans Benefit Management Service (VBMS), Corporate Data Warehouse, and VA Time & Attendance System (VATAS), and commercial real-time analytics software packages, and open source tools enabling rapid web and mobile application development.
Monitoring and understanding API management platforms is something I’ve done since 2010 in a full time capacity. I’ve studied the evolution of the enterprise API gateway from its service oriented architecture days, into the cloud as a commodity with AWS, Google, and Azure, as well as the deployment on-premise, in hybrid scenarios, and even on-device. To support my partners, clients, and the readers of my blog, I work regularly to test drive, and understand leading API management solutions available on the market today.
While I study the proprietary and open source API gateway and management solutions out there, I also work to understand where API providers will be operating these solutions, which means having a deep understanding of how native and installed API management occurs in the cloud, hybrid, on-premise, on-device, and anywhere it needs to be in 2017.
- Multi-Vendor - API gateways have become a commodity and are baked into every one of the major cloud providers. AWS has the lead in this space, with Google, and Azure following up. We stay in tune with a multi-vendor gateway approach to be able to support, and deliver solutions within the context of how our customers and clients are already operating. To support the number of transactions the VA is currently seeing, as well as future API implementations, a variety of approaches will be required to support the requirements of many different internal groups, as well as trusted partners.
- Multi-Cloud - As I already mentioned, we always look to support multiple gateways across multiple datacenter and cloud providers. Our goal is to understand the native opportunities on each cloud platform, the environment to deliver hybrid networks and environments, as well as how each of the leading open source and proprietary API management providers operate within a cloud environment. Making it easier to integrate with, and deliver API facades for, any backend resources available within any cloud environment.
- Reusable Models - One key to successful API management in any environment is the establishment of reusable models, and the reuse of definitions, schema, and templates which can be applied consistently across groups. Any API gateway and management solution should have an accompanying database, machine, or container image, and be driven using a machine readable API definition using OpenAPI, API Blueprint, or RAML. All data should possess an accompanying definition using JSON Schema, leveraging standards like JSON Path for mapping relationships, and establishing mappings between resources. Everything should be defined for implementation, reuse, and continuous deployment as well as integration (see the sketch after this list).
- API-Driven - Every mature API management platform automates using APIs. Mulesoft, Apigee, 3Scale, AWS Gateway, all have APIs for managing the management of APIs. This allows for the seamless integration across all systems, even disparate backend legacy systems. Extending identity, service compositions, plans, rate limits, logging, analysis, and other exhaust from API management into any system it is needed. Lighthouse should be an API driven platform for defining, designing, deploying, and managing APIs the serve the VA.
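To make the reusable models point more concrete, here is a minimal sketch, in Python, of a machine readable OpenAPI definition whose response schema is factored into a shared JSON Schema component, with a simple check a build pipeline could run before the definition drives any gateway. The facility API, its fields, and the check itself are hypothetical illustrations, not actual VA resources.

```python
# A minimal sketch of the "reusable models" idea above: a machine
# readable OpenAPI definition whose response schema is factored into a
# shared JSON Schema component, checked before it drives any gateway.
# The facility API and its fields are hypothetical, not VA resources.
import yaml  # pip install pyyaml

OPENAPI_DEFINITION = """
openapi: 3.0.0
info:
  title: Facility Directory (example)
  version: 1.0.0
paths:
  /facilities:
    get:
      summary: List facilities
      responses:
        '200':
          description: A list of facilities.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Facility'
components:
  schemas:
    Facility:
      type: object
      properties:
        id: {type: string}
        name: {type: string}
"""

spec = yaml.safe_load(OPENAPI_DEFINITION)

# Reuse check: every response schema should reference a shared component,
# so the same JSON Schema definition travels with every implementation.
for path, operations in spec["paths"].items():
    for verb, operation in operations.items():
        for status, response in operation.get("responses", {}).items():
            for media in response.get("content", {}).values():
                flattened = yaml.dump(media.get("schema", {}))
                assert "$ref" in flattened, f"{verb.upper()} {path} {status}: inline schema"

print("definition loads, and all responses reuse shared schema components")
```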
In full disclosure, I've worked with almost every API management provider out there in some capacity. Similar to providing industry analysis to Mulesoft, I helped WSO2 define and evolve their open source API management solution. I engaged and partnered with 3Scale (now Red Hat), Apigee (now Google), Mashery (now Tibco), and others. I've also engaged with NREL and their development of API Umbrella, as well as the GSA when it came to implementing it in support of api.data.gov. I'm currently taking money from Tyk, 3Scale, Runscope, and Restlet, all servicing the space. It is my job to understand the API management playing field, as well as the nuts and bolts of all the leading providers.
While it is important to be able to dive deep and support specific solutions for specific projects when it comes to API management, for a platform to handle the scale and scope of the Lighthouse API management platform, it will have to provide support for connecting to a robust set of known, and unknown, backend systems. While many organizations we've worked with strive for a single gateway or API management solution, in reality many end up operating multiple solutions, across many vendors, and multiple cloud or on-premise environments. The key to a robust, scalable platform is the ability to easily define, configure, and repeat regularly across environments, providing a consistent API surface area across implementations, groups, backends, and gateway infrastructure.
4. Describe your company’s specific experience and the strategies that you have employed for ensuring the highest level of availability and responsiveness of the platform (include information about configuration based capabilities such as multi-region deployments, dynamic routing, point of presence, flexible/multi-level caching, flexible scaling, traffic throttling etc.).
Our approach to delivering API infrastructure involves assessing scalability at every level of API management within our control. When it comes to API deployment and management we aren't always in control over every layer of the stack, but we always work to configure, optimize, and scale whenever we possibly can. Every API will be different, and a core team will enjoy a different amount of control over backends, but we always consider the full architectural stack when ensuring availability and responsiveness across API resources, looking at the following common layers:
- Network - When possible we work to configure the network to allow for prioritization of traffic, and isolation of resources at the network level.
- System - We will work with backend system owners to understand what their strengths and weaknesses are, understand how systems are currently optimized, and assess what new ways are possible as part of Lighthouse API operations.
- Database - When there is access to the database level, we will assess the scalability of instances and tables in service of API deployment. If possible we will work with database groups to deploy replica implementations that can be dedicated to supporting API implementations.
- Serverless - Increasingly we are using serverless technology to help carry the load for backend systems, adding another layer for optimization behind the gateway. In some situations a serverless layer can act as a proxy, cache, and redundancy for backend systems, opening up new approaches to availability (see the sketch after this list).
- Gateway - At the gateway level there are opportunities for scaling, and delivering performance, as well as enabling caching, and rate limiting to optimize specific types of behaviors amongst consumers. Each API will have its own plan for optimization and reliability, tailored for its precise configuration and time to live (TTL).
- DNS - DNS is another layer which will play a significant role in the reliability and performance of API operations. There are numerous opportunities for routing, caching, and multi-deployment optimizations at the DNS to support API operations.
- Caching - There are multiple levels at which to think about caching in API infrastructure, from the backend up to DNS. The entire stack should be considered on an API by API basis, with a robust approach to knowing when and where to use caching, and where not to.
- Regional - One of the benefits of being multi-vendor, and multi-cloud, and on-premise, is the ability to deliver API infrastructure where it is needed. Delivering in multiple geographic regions is an increasingly common reason for using the cloud, as well as allowing for flexibility in the deployment, management, and testing of APIs in any US region.
- Monitoring - One aspect of availability and reliability is monitoring infrastructure 24x7, keeping an eye on availability and performance. Sustained monitoring, across providers, and regions is essential to understanding how to ensure APIs are available and dependable.
- Analysis - Extracted from monitoring, and logging, the analysis across API operations feeds into the overall availability and reliability of APIs. Real-time analysis of API availability and performance over time is critical for dependable infrastructure.
- Partners - I've experienced the lack of dependability of VA APIs first hand. During my time at the agency I was forced to shut down APIs I had worked on during the government shutdown, introducing me to another dimension of API reliability, where I feel external partners can help contribute to the redundancy and availability of APIs beyond what the agency can deliver on its own.
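As a concrete example of the serverless layer mentioned above, here is a minimal sketch of an AWS Lambda style function that fronts a backend system with a simple time to live (TTL) cache, falling back to stale responses when the backend is unavailable. The backend URL, TTL, and event shape are hypothetical placeholders, not part of any actual VA system.

```python
# A minimal sketch of the serverless layer described above: an AWS
# Lambda style handler that proxies a backend system with a simple TTL
# cache, falling back to stale responses when the backend is down. The
# backend URL, TTL, and event shape are hypothetical placeholders.
import json
import time
import urllib.request

BACKEND_URL = "https://internal.example.gov/legacy/records"  # hypothetical
CACHE_TTL_SECONDS = 300

_cache = {}  # survives across warm invocations of the function


def handler(event, context):
    key = event.get("path", "/")
    cached = _cache.get(key)
    now = time.time()

    # Serve fresh cache hits without touching the backend at all.
    if cached and now - cached["at"] < CACHE_TTL_SECONDS:
        return {"statusCode": 200, "body": cached["body"]}

    try:
        with urllib.request.urlopen(BACKEND_URL + key, timeout=5) as resp:
            body = resp.read().decode("utf-8")
        _cache[key] = {"body": body, "at": now}
        return {"statusCode": 200, "body": body}
    except Exception:
        # Backend failed: fall back to a stale copy if one exists,
        # trading freshness for availability.
        if cached:
            return {"statusCode": 200, "body": cached["body"]}
        return {"statusCode": 503, "body": json.dumps({"error": "backend unavailable"})}
```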
API stability isn't purely a technical game. There are plenty of tools at our disposal for scaling, optimizing, and injecting efficiencies into the API life cycle. However, the most important contributor to reliability is experience, and making sure you measure, analyze, and understand the needs of each API. This is where modern approaches to API management come into play, understanding how API consumers are putting resources to work, and optimizing each API to support this at every layer possible.
5. The experiences you have and the strategies that you have employed to ensure the highest level of security for the platform. Please address policy / procedure level capabilities, capabilities that ensure security of data in transit (e.g. endpoint as well as payload), proactive threat detection etc.
API security is another dimension of the API sector I've been monitoring and studying on a regular basis. I have just finished a comprehensive guide to the world of API security, which will be the first guide of its kind when published next week. I've been monitoring general approaches to securing infrastructure, as well as how they impact API security. I've taken what I've found in my research, as well as in my work with clients, and aggregated it into the following areas to consider as we develop a strategy to deliver high levels of security:
- Encryption - Encryption in transport, on the disk, in storage, and in database.
- API Application Keys - Requiring keys for ALL API access, no matter whether internal or external (see the sketch after this list).
- Identity & Access Management (IAM) - Establish roles, groups, users, policies and other IAM building blocks, healthy practices and guidance across API implementations and teams.
- Service Composition - Provide common definitions for how to organize APIs, establish and monitor API plans and usage, and use to secure and measure access to ALL API resources.
- Common Threats - Be vigilant for the most common threats, from SQL injection to DDoS, making sure all APIs don't fall victim to the usual security suspects.
- Incident Response - Establish an incident response team, and protocol, making sure when a security incident occurs there is a standard response.
- Governance - Bake security into API governance, taking it beyond just API design, and actually connecting API security to the service level agreements governing platform operations.
- DNS - Leverage the DNS layer as the front line of API security, identifying threats, and establishing and maintaining a first line of defense.
- Firewall - Building on top of existing web security practices, and deploying, and maintaining a comprehensive set of rules at a firewall level.
- Automation - Implement security scanning, fuzzing, and automated approaches to crawling networks and infrastructure looking for security vulnerabilities.
- Testing & Monitoring - Dovetail API security with API testing and monitoring, identifying malformed API requests, responses, and other illnesses that can plague API operations.
- Isolation - Leveraging virtualization, serverless, and other API design and infrastructure patterns to keep API resources isolated, and minimizing any damage that can occur in a breach.
- Orchestration - Acknowledging the security risks that come with modern approaches to orchestrating API operations, continuously deploying, and integrating across the platform.
- Client Automation - Develop a strategy for managing API consumption, and identifying, understanding, and properly managing bots, and other types of client automation.
- Machine Learning - Leveraging the development and evolution of machine learning models that help look through log files, and other threat information sources, and using artificial intelligence to respond to and address evolving security concerns.
- Threat Information - Subscribing to external sources of threat information, and working with external organizations to open up the learning opportunities across platforms regarding the threats they face.
- Metrics & Analysis - Tap into the metrics and analysis that comes with API management, and build upon the awareness introduced when you consistently manage API consumption. Translating this awareness into a responsive and agile approach to delivering API security across groups, and externally with partners.
- Bug Bounties - Defining and executing public and private bug bounties to help identify security vulnerabilities early on.
- Communications - Establish a healthy approach to communicating around platform security. Openly discussing healthy practices, speaking at conferences, and participating in external working groups. Maintaining a regular cadence of communication in normal times, as well as a protocol for communication during security incidents.
- Education - Develop curriculum and other training materials around platform security, and make sure API consumers, partners, and internal groups are all getting regular doses of API security education.
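To ground the application keys item above, here is a minimal sketch of enforcing keys on every API call using a Flask request hook. The in-memory key store, header name, and route are hypothetical stand-ins for what would actually live in the gateway and IAM layers.

```python
# A minimal sketch of the "keys for ALL API access" rule above, using a
# Flask hook that rejects any request without a valid application key,
# internal or external. The in-memory key store and header name are
# hypothetical stand-ins for the gateway and IAM layers.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical key store; a real platform would consult IAM or the gateway.
API_KEYS = {"demo-key-123": {"application": "example-app", "plan": "internal"}}


@app.before_request
def require_api_key():
    consumer = API_KEYS.get(request.headers.get("X-API-Key"))
    if consumer is None:
        return jsonify({"error": "a valid API key is required"}), 401
    # Stash the consumer so handlers and logging know who is behind
    # every call, feeding service composition and metrics downstream.
    request.environ["api.consumer"] = consumer


@app.route("/status")
def status():
    return jsonify({"ok": True})


if __name__ == "__main__":
    app.run(port=8080)
```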
Our API security experience comes from past projects, working with vendors, and studying the best practices of API leaders. Increasingly API security is a group effort, with a growing number of opportunities to work with security organizations, and major tech platforms who see the majority of threats present today, increasing the volume of information available to integrate directly into platform security operations.
**6. Please describe your experience with all the capabilities that the platform offers and the way you have employed them to leverage existing enterprise digital assets (e.g. other integration service buses, REST APIs, SOAP services, databases, libraries)**
I have been working with databases for 30 years. I began working with web services in 2000, and have worked exclusively with web APIs in a full time capacity since 2010. I’ve worked hard to keep myself in tune with the protocols, and design patterns in use across the public and private API sectors. Here are the capabilities I’m tuned into currently.
- SOAP - Making sure we are able to support web services, and established approaches to accessing data and other resources.
- REST - Web APIs that follow RESTful patterns are our primary focus, leveraging low-cost web infrastructure to securely make resources available.
- Hypermedia - Understanding the benefits delivered by adopting common hypermedia media types, delivering more experience focused APIs that emphasize relationships between resources.
- GraphQL - Understanding when the use of GraphQL makes sense for some data focused APIs, delivering resources to single page and mobile applications.
- gRPC - Staying in tune with being able to deliver higher performance APIs using gRPC, augmenting the features brought by web APIs, for use internally, and with trusted partners.
- Websockets - Leverage websockets for streaming of data in some situations, providing more real time integration for applications.
- Webhooks - Developing webhook infrastructure that makes APIs a two-way street, pushing notifications and data externally based upon events or on a schedule (see the sketch after this list).
- Event Sourcing - Developing a sophisticated view of the API landscape based upon events, and identifying, orchestrating and incentivizing event-based behavior.
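Here is a minimal sketch of the webhook approach referenced above: the platform pushes an event to a consumer's registered callback URL, signing the payload with a shared secret so the receiver can verify its origin. The callback URL, secret, header name, and event payload are all hypothetical examples.

```python
# A minimal sketch of the webhook infrastructure described above: the
# platform pushes an event to a consumer's registered callback URL,
# signing the payload with a shared secret so the receiver can verify
# its origin. The URL, secret, and event payload are hypothetical.
import hashlib
import hmac
import json
import urllib.request

CALLBACK_URL = "https://consumer.example.com/hooks/records"  # hypothetical
SHARED_SECRET = b"per-consumer-secret"  # issued at webhook registration


def deliver(event: dict) -> int:
    body = json.dumps(event).encode("utf-8")
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    request = urllib.request.Request(
        CALLBACK_URL,
        data=body,  # POST, since a body is attached
        headers={
            "Content-Type": "application/json",
            "X-Hub-Signature-256": "sha256=" + signature,
        },
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status


if __name__ == "__main__":
    deliver({"type": "record.updated", "id": "abc123"})
```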
This reflects the core capabilities present across the API landscape. While SOAP and REST make up much of the landscape, hypermedia, GraphQL, event sourcing, and other approaches are seeing more adoption. I emphasize that every platform make REST, or web APIs, the central focus, keeping the bar low for API consumers, and leveraging web technology to reach the widest audience possible.
7. Please describe your experience and strategies that you have employed at enterprises to create Experience Services from mashup/aggregation/combination of other API’s, services, database calls etc.
I've been tracking API aggregation as a discipline for over five years now, having worked with several groups to aggregate common APIs. It is an underdeveloped layer of the web API sector, but one that has a number of proprietary, as well as open source, solutions available.
I've personally worked on a number of API aggregation projects, spanning a handful of business sectors:
- Social - Developed an open source, as well as SaaS, solution for aggregating Facebook, LinkedIn, Twitter, and other social networks.
- Images - Extended a system to work across image APIs, working with Flickr, Facebook, Instagram, and other image sharing solutions.
- Wearables - Developed a strategy for aggregating health data across a handful of the leading wearable device providers.
- Documents - Partnered with a company who provides document aggregation across Box, Google Drive, and other leading document sharing platforms.
- Real Estate - I founded a startup that did MLS data aggregation across APIs, FTP locations, and scraped targets, providing regional, and specific zip code API solutions.
- Advertising - I’ve worked with a company that aggregates advertising APIs bringing Google, Facebook, Twitter, and other platforms together into a single interface.
These are all the reasons why I was working on API aggregation, and spending time researching the subject. Here are some of the technology, and approaches I have been using to deliver on API aggregation.
- OpenAPI - I heavily use the OpenAPI specification to aggregate API definitions, and leverage JSON Schema to make connections and map API and data resources together.
- APIs.json - After working on the data.json project with the White House, inventorying open data across federal agencies, I developed an open specification for indexing APIs, and building collections, and other aggregate definitions for processing at discovery or runtime.
- JSON Schema - All databases and data used as part of API operations possess a JSON Schema definition, which accompanies the OpenAPI that defines access to an API. JSON Schema provides the ability to define references across and between individual schema.
- JSON Path - XPath for JSON, enabling the ability to analyze, transform, and selectively extract data from API requests and responses (see the sketch below).
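Since JSON Path does a lot of the heavy lifting in aggregation work, here is a minimal sketch of how it can map differently shaped responses from two hypothetical providers into one common field, assuming the jsonpath-ng Python package. The provider payloads and field names are made up for illustration.

```python
# A minimal sketch of JSON Path as the mapping layer in aggregation:
# two hypothetical providers return the same logical data in different
# shapes, and JSON Path expressions pull both into one common field.
# Assumes the jsonpath-ng package (pip install jsonpath-ng).
from jsonpath_ng import parse

responses = {
    "provider_a": {"results": [{"full_name": "Facility One"},
                               {"full_name": "Facility Two"}]},
    "provider_b": {"data": {"items": [{"attributes": {"title": "Facility Three"}}]}},
}

# One JSON Path expression per provider maps its schema to our field.
mappings = {
    "provider_a": parse("results[*].full_name"),
    "provider_b": parse("data.items[*].attributes.title"),
}

names = []
for provider, document in responses.items():
    names.extend(match.value for match in mappings[provider].find(document))

print(names)  # ['Facility One', 'Facility Two', 'Facility Three']
```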
In an API utopia, everyone would use common API definitions, media types, and schema. In the real world, everyone does things their own way. API aggregation, and tools like OpenAPI, APIs.json, JSON Schema, and JSON Path, allow us to standardize through machine readable definitions that connect the dots for us. API aggregation is still very much a manual process, or something offered by an existing SaaS provider, without much innovation. The tools are out there, we just need more examples we can emulate, and tooling to support them.
8. Please describe your experience and strategies for Lifecycle Management of APIs for increasing productivity and enabling VA to deliver high value APIs rapidly, on-boarding app developers and commercial off the shelf applications.
I feel like this overlaps with question number 14, but is more focused on on-boarding, and the client perspective of the API lifecycle. Question 14 feels like lifecycle optimization from an API provider perspective, while this is about efficiencies regarding consumption. Maybe I'm wrong, but I am a big believer in the ability of a central API portal, as well as federated portals, to increase API production, because they deliver the resources consumers need, in a way that allows them to onboard and consume across many developers and commercial software platforms.
My experience in fulfilling an API lifecycle from the consumer perspective always begins with a central portal, but one that possesses all the required API management building blocks to not just deliver APIs rapidly, but incentivize consumption, and feedback in a way that evolves them throughout their life cycle:
- Portal - Have a central location to discover and work with all API resources, even if there is a federated network of portals that work together in concert across groups.
- Getting Started - Providing clear, frictionless on-boarding for all API consumers in a consistent way across all API resources.
- Authentication - Simple, pragmatic authentication with APIs, as part of a larger IAM scaffolding.
- Documentation - Up to date, API definition driven, interactive API documentation that demonstrates what an API does.
- Code Resources - Code samples, libraries, and SDKs that make integration painless and frictionless, allowing applications to quickly move from development to production.
- Support - Providing direct and indirect support services that funnel back through feedback loops into the product roadmap.
- Communications - A regular drumbeat of API platform communications, helping API consumers navigate from discovery to integration, and keeping them informed regarding what is happening across the platform.
- Road Map - A clear road map, as well as a resulting change log, that keeps API consumers in tune with what's next, reducing the gap that can exist between API provider and consumer.
- Applications - A robust directory of certified applications, browser, platform, and 3rd party plugins and connectors, demonstrating what is possible via the platform.
- Analysis - Logging, measuring, and analyzing API platform traffic, sign ups, logins, and other key data points, developing an awareness of what is working when it comes to API consumption, quickly identifying where the friction is, and eliminating it with future releases.
Developer Experience (DX) is something that significantly speeds up the overall API lifecycle. Backend teams can't efficiently define, deliver, and evolve their APIs without API consumers on-boarding, integrating, and providing feedback on what works, and what doesn't. A central API portal strategy for Lighthouse API management is key to facilitating movement along the API lifecycle, reducing friction, eliminating road blocks, and reducing the chance an API will never fully be realized in production.
Lighthouse API management is a central API platform portal, but could also be a forkable DX experience that can be emulated by other internal groups, adding federated edges to the Lighthouse API management platform. Providing a network of consistent API portals, across many different groups, which share a common on-boarding, authentication, documentation, support, communication, and analysis approach. Shifting the lifecycle beyond just a linear, start to finish thing, and making it something that works as a network of API nodes that all work together as a single platform.
9. Please describe your experience and strategies for establishing effective governance of the APIs.
I’ve been working with a variety of organizations around the topic of API governance for 2-3 years now, and with some recent advances I’m starting to see more API governance strategies mature, and become something that goes well beyond just API design.
API governance is something I've tuned into across a variety of enterprises, higher educational institutions, and government agencies. These are the areas I'm researching currently as part of my work as the API Evangelist:
- Design Guide - Establishing API design guide(s) for use across organizations.
- Deployment - Ensuring there are machine readable definitions for every deployment pattern, and they are widely shared and implemented.
- Management - Quantify what API management services, tooling, and best practices groups should be following, and putting to work in their API operations.
- Testing - Defining the API testing surface area, and providing machine readable definitions for all test patterns, while leveraging OpenAPI and other definitions as a map of the landscape.
- Communication - Plan out the communication strategy as part of the API governance strategy.
- Support - What support mechanisms are in place for governance, as well as definitions of best practices for supporting individual API implementations.
- Deprecation - Define what deprecation looks like even before an API is defined or ever delivered, establishing best practices for deprecation of all resources.
- Analysis - Measuring, analyzing, and reporting on API governance. Identifying where it is being applied effectively, and where the uncharted territory lies.
- Coaches - Train and deploy API coaches who work with every group on establishing, evolving, incentivizing, and enforcing API governance across groups.
- Rating - Establishing a rating system for quantifying how compliant APIs are when it comes to API governance, providing an easy way to understand how close an API is, or isn't (see the sketch after this list).
- Training - The development and execution of training around API governance, working with all external groups to help define API governance, and take an active role in its implementation.
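To make the rating bullet concrete, here is a minimal sketch of scoring an OpenAPI definition against a few hypothetical design guide rules, producing a simple compliance percentage that coaches and dashboards could report on. The rules and the example definition are illustrative, not an actual VA design guide.

```python
# A minimal sketch of the governance rating idea above: score an
# OpenAPI definition against a few design guide rules, producing a
# simple compliance percentage that coaches and dashboards can report.
# The rules and the example definition are hypothetical illustrations.
def rate(spec: dict) -> float:
    checks = []
    for path, operations in spec.get("paths", {}).items():
        for verb, operation in operations.items():
            responses = operation.get("responses", {})
            checks.append(bool(operation.get("summary")))  # documented intent
            checks.append(bool(operation.get("tags")))     # grouped for discovery
            checks.append("200" in responses or "201" in responses)  # happy path defined
    return 100.0 * sum(checks) / len(checks) if checks else 0.0


example = {
    "paths": {
        "/facilities": {
            "get": {"summary": "List facilities", "tags": ["facilities"],
                    "responses": {"200": {"description": "OK"}}},
            "post": {"responses": {}},  # undocumented, drags the score down
        }
    }
}
print(f"governance score: {rate(example):.0f}%")  # 50%
```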
Sadly, API governance isn’t something I’ve seen consistently applied across the enterprise, at institutions and government agencies. There is no standard for API governance. There are few case studies when it comes to API governance to learn from. Slowly I am seeing larger enterprises share their strategies, and seeing some universities publish papers on the topic. Providing some common building blocks we can organize into a coherent API governance strategy.
10. Please describe your experience and methodologies for DevOps/RunOps processes across deployments. Highlight strategies for policy testing, versioning, debugging etc.
API management is a living entity, and we focus on delivering API operations with flat teams who have access to the entire stack, from the backend to understanding application and consumer needs. All aspects of the API life cycle embrace a microservices, continuous evolution pace, with Github playing a central role from define to deprecation.
- CI/CD - Shared repositories across all definitions, code, documentation, and other modules that make up the lego pieces of API operations.
- Testing & Monitoring - Ensuring every building block has a machine readable set of assertions, tests, and other artificats that can be used to verify operational integrity.
- Microservices - Distilling down everything to a service that will meet a platform objective, and benefit an API consumer, in as small a package as possible.
- Serverless - Leveraging functions that reflect a microservices frame of mind, allowing much smaller units of compute to be delivered by developers.
- Versioning - Following semantic versioning, with version numbers reflecting MAJOR.MINOR.PATCH, kept in sync across backend, API, and client applications (see the sketch after this list).
- Dependencies - Scanning and assessing for vulnerabilities in libraries, 3rd party APIs, and other ways dependencies are established across architecture.
- Security - Scanning and assessing security risk introduced by dependencies, and across code, ensuring that testing and monitoring reflects wider API security practices.
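Here is a minimal sketch of the semantic versioning rule above: parsing MAJOR.MINOR.PATCH version strings and classifying the bump between two releases, the kind of check a CI/CD pipeline could run before anything is published. The function names are hypothetical.

```python
# A minimal sketch of the semantic versioning rule above: parse
# MAJOR.MINOR.PATCH strings and classify the change between releases,
# the kind of check a CI/CD pipeline can run before publishing. The
# function names are hypothetical.
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")


def parse_version(version: str) -> tuple:
    match = SEMVER.match(version)
    if not match:
        raise ValueError(f"not a MAJOR.MINOR.PATCH version: {version}")
    return tuple(int(part) for part in match.groups())


def classify_bump(previous: str, current: str) -> str:
    prev, curr = parse_version(previous), parse_version(current)
    if curr[0] != prev[0]:
        return "major"  # breaking change: clients and docs must resync
    if curr[1] != prev[1]:
        return "minor"  # backwards compatible feature
    return "patch"      # fix only


print(classify_bump("1.4.2", "2.0.0"))  # major
```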
A DevOps focus is maintained across as many stops along the API life cycle as possible, and reflected in API governance practices. However, it is also recognized that a DevOps approach will not always be compatible with existing legacy practices, and custom approaches might be necessary to maintain specific backend resources, until their practices can be evolved, and brought into alignment with wider practices.
11. Please describe your experience and strategies employed for analytics related to runtime management, performance monitoring, usage tracking, trend analysis.
Logging and analysis are fundamental components of API management, feeding an overall awareness of how APIs are being consumed, which contributes to the product road map, security, and overall platform reliability. The entire API stack should be analyzed, from the backend to the furthest application endpoints, whenever possible.
- Network - Logging and analysis at the packet and network level, understanding where the network bottlenecks are.
- Instance - Monitoring and measuring any server instance, whether it is bare metal, virtual, or containerized, providing a complete picture of how each instance in the API chain is performing.
- Database - When access to the database layer is present, adding the logging, and other diagnostic information, to the analysis stack, understanding how backend databases are doing their job.
- Serverless - Understanding the performance and usage of each function in the API supply chain, making sure each lego building block is accounted for.
- Regional - Understanding how API resources deployed in various regions are performing, but also adding the extra dimension of measuring and monitoring APIs from a variety of geographic regions.
- Management - Leveraging API plans at the API gateway, and understanding API consumption at the user, application, and individual resource level. Painting a picture of how APIs are performing across access tiers (see the sketch after this list).
- DNS - Logging and measuring runtime responsiveness at the DNS layer, understanding any setting, configuration, or other detail that may be creating friction at the DNS layer.
- Client - Introducing logging, analysis, and tracking of errors at the SDK level, developing usage and performance reports from the production client vantage point.
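As a small illustration of the management layer analysis above, here is a minimal sketch that rolls hypothetical gateway log records up into per-API call counts, distinct consumers, and p95 latency, the kind of exhaust that feeds trend analysis and the road map. The log records and field names are made up.

```python
# A minimal sketch of the management layer analysis above: rolling
# hypothetical gateway log records up into per-API call counts,
# distinct consumers, and p95 latency, the kind of exhaust that feeds
# trend analysis and the road map. The records and fields are made up.
from collections import defaultdict
from statistics import quantiles

log_records = [
    {"api": "/facilities", "key": "app-1", "ms": 42},
    {"api": "/facilities", "key": "app-2", "ms": 61},
    {"api": "/benefits", "key": "app-1", "ms": 120},
    {"api": "/facilities", "key": "app-1", "ms": 38},
]

usage = defaultdict(lambda: {"calls": 0, "consumers": set(), "latencies": []})
for record in log_records:
    entry = usage[record["api"]]
    entry["calls"] += 1
    entry["consumers"].add(record["key"])
    entry["latencies"].append(record["ms"])

for api, entry in usage.items():
    latencies = entry["latencies"]
    # quantiles() needs at least two data points; fall back otherwise.
    p95 = quantiles(latencies, n=100)[94] if len(latencies) > 1 else latencies[0]
    print(f"{api}: {entry['calls']} calls, {len(entry['consumers'])} consumers, p95 {p95:.0f}ms")
```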
API management has matured over the last decade to give us a standard approach to managing API access at consumption time, optimizing usage, limiting bad behavior, and incentivizing healthy behavior. API monitoring and testing practices have evolved this perspective, showing the health, availability, and usage of APIs from the client side. All of this information gets funneled into the road map, refining API gateway plans, rate limits, and other run time systems to adjust and support desired API usage.
12. Please describe your experience with integrating MuleSoft with Enterprise Identity Management, Directory Services and implementing Role Based access.
I do not have any experience in this area with Mulesoft specifically, but I have worked in general with IAM and directory solutions on other platforms, with heavy usage on AWS, for securing the API gateway's interaction with existing backend systems.
13. Please describe your experience with generating and documenting API’s and the type of standards they follow. Describe approaches taken for hosting these documentations and keeping them evergreen.
API documentation is always a default aspect of the API platform, and delivered just like code, as part of DevOps and CI/CD workflows using Github. As with almost every stop along the API life cycle, all documentation is API definition driven, keeping things interactive, with the following elements always in play:
- Definitions - All documentation is driven using OpenAPI, API Blueprint, or RAML, acting as the central machine readable truth of what an API does, and what schema it uses. API definition driven documentation contributes to it always being up to date, reflecting the freshest view of any API.
- Interactive - All API documentation is API driven, allowing for the documentation to be interactive, acting as an explorer, and dynamic dashboard for playing with and understanding what an API does.
- Modular - Following a microservices approach, all APIs should have small, modular, API definitions, that drive modular, meaningful, and useful API documentation.
- Visualizations - Evolving on top of interactive documentation features, we are beginning to weave more visual, dashboard, and reporting features directly into API documentation.
- Github - Github plays a central role in the platform life cycle, with all API definitions and documentation running within Github repositories, and integrated as part of CI/CD workflows, and a DevOps frame of mind.
All platform documentation is a living, breathing element of the ecosystem. It should be versioned, evolved, and deployed along with other supporting microservice artifacts. Mulesoft has documentation to support this approach, and there is a suite of open source solutions we can consider to support a variety of different types of APIs. A minimal sketch of what definition driven documentation looks like follows.
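This sketch renders a Markdown endpoint listing straight from an OpenAPI definition, so CI/CD can regenerate the docs whenever the definition changes. The definition shown is a hypothetical example, not an actual VA API.

```python
# A minimal sketch of definition driven documentation: rendering a
# Markdown endpoint listing straight from an OpenAPI definition, so
# CI/CD regenerates the docs whenever the definition changes. The
# definition shown is a hypothetical example.
spec = {
    "info": {"title": "Facility Directory (example)", "version": "1.0.0"},
    "paths": {
        "/facilities": {
            "get": {"summary": "List facilities"},
            "post": {"summary": "Register a facility"},
        }
    },
}

lines = [f"# {spec['info']['title']} v{spec['info']['version']}", ""]
for path, operations in spec["paths"].items():
    for verb, operation in operations.items():
        lines.append(f"- `{verb.upper()} {path}` - {operation.get('summary', 'undocumented')}")

print("\n".join(lines))
```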
14. Please describe your proposed complete process lifecycle for publishing high quality, high value APIs with highest speed to market.
Evolving beyond question number 8, and addressing the API provider side of the coin, I wanted to share a complete (enough) view of the life cycle from the API provider perspective. Addressing the needs of backend API teams, as well as the core API team, when it comes to delivering usable, reliable APIs that are active, and enjoy healthy release cycles. While not entirely a linear approach, here are many of the stops along our proposed API lifecycle, as applied to individual APIs, but applied consistently across the entire platform.
- Definitions - Every API starts as an API definition, which follows the API throughout its life, and drives every other stop along the way.
- Design - Planning, applying, evolving, and measuring how API design is applied, taking the definition, and delivering as a consistent API that complies with API governance.
- Virtualization - Delivery of mock, sandbox, and other virtualized instances of an API, allowing for prototyping, testing, and playing around with an API in its early stages, or throughout its life.
- Deployment - Deploying APIs using a common set of deployment patterns that drives API gateway behavior, connectivity with backend systems, IAM, and other consistent elements of how APIs will be delivered.
- Management - The employment of common approaches to API management, some of which will be delivered and enforced by the API gateway, but also through other training, and sharing of best practices made available via API governance.
- Testing & Monitoring - Following DevOps, CI/CD workflows when it comes to developing tests, and other artifacts that can be used to monitor, test, and assert that APIs are meeting service level agreements, and in compliance with governance.
- Security - Considering security at every stop along the API life cycle, scanning, testing, and baking in security best practices all along the way.
- Client & SDK - Establish common approaches to generating, delivering, and deploying client solutions that help API consumer get up and running using an API.
- Discovery - Extending machine readable definitions to ensure an API is indexed and discoverable via service directories, documentation, and other catalogs of platform resources.
- Support - Establishing POCs, and support channels that are available as part of any single API's operations, ensuring it has the required support to meet the minimum bar for service operations.
- Communications - Identifying any additions to the overall communication strategy around an API's life cycle, announcing that it exists on the blog, that changes and releases are on the road map, and showcasing integrations via the platform Twitter account.
- Road Map - Making sure there are road map, and change log entries for an API showing where it is going, and where it has been.
- Deprecation - Lay out the deprecation strategy for an API as soon as it is born, setting expectations regarding when an API will eventually go away.
This life cycle will play out over and over for each API published on the Lighthouse API platform. It will independently be executed by API teams for each API they produce, and replayed with each major and minor release of an API. A well defined, and well traveled API life cycle helps ensure consistency across teams, helping enforce compliance, familiarity, and reusability across APIs, no matter which team is behind the facade.
15. Please describe your experience and approach towards establishing a 24x7 technical support team for the users, developers and other stakeholders of the platform.
Support is an essential building block of any successful API platform, as well as a default aspect of every single API's individual life cycle. We break API platform support into two distinct categories.
API support should begin with self-service support, encouraging self-sufficiency of API consumers, and minimizing the amount of platform resources needed to support operations:
- FAQ - Publishing a list of frequently asked questions as part of getting started and API documentation, keeping it an easy to access, up to date list of the most common questions API consumers face.
- Knowledge Base - Publishing of a help directory or knowledge base of content that can help API consumers understand a platform, providing access to high level concepts, as well as API specific functionality, errors, and other aspects of integration.
- Forum - Operate an active, moderated, and inclusive forum to assist with support, providing self-service answers to API consumers, which also allows them to asynchronously get access to answers to their questions.
Beyond self-service support all API platforms should have multiple direct support channels available to API consumers:
- Monitoring - Providing a live, 3rd party status page that shows monitoring status across the platform, and individual APIs, including historical data.
- Email - Allow API consumers to email someone and receive support.
- Tickets - Leveraging a ticketing system, such as Zendesk, or other SaaS or possibly open source solution, allowing API consumers to submit private tickets, and move each ticket through a support process.
- Issues - Leveraging Github issues for supporting each component of the API lifecycle, from API definition, to API code, documentation, and every other stop. Providing relevant issues and conversation going on around each of the moving parts.
- SMS - SMS notifications regarding platform events, support tickets, platform monitoring, and other relevant areas of platform operations.
- Twitter - Providing public support via a Twitter account, responding to all comments, and routing them to the proper support channel for further engagement.
A combination of self-service and direct support channels allows for a resource starved core API team, as well as individual backend teams to manage many developers and applications, across many API resources in an efficient manner. Ensuring API consumers get the answers they are looking for, while ensuring relevant feedback and comments end up back in the product road map, with each appropriate product team.
16. Please describe your experience in establishing metrics and measures for tracking the overall value and performance of the platform.
This somewhat overlaps with question #11, but I'd say it focuses in on the heart of metrics and analysis at the API management level. Understanding performance and reliability through the entire stack is critical to platform reliability, but the API management core is all about developing an awareness of how APIs are being consumed, and the value generated as part of this consumption. While performance definitely impacts value, we focus on API management to help us measure and understand the consumption of each resource, and understand and measure the value delivered in that consumption.
I’ve been studying how API management establishes the metrics needed, and measures and tracks API consumption, trying to understand value generation using the following elements:
- Application Keys - Every API call possesses a unique key identifying the application and user behind the consumption. This key is essential to understanding how value is being generated in the context of each API resource, as well as across groups of resources.
- Active Users - New, no-name, automated API users are easy to attract. It is the active, identifiable, communicative users we are shooting for, so quantifying and measuring what an active user is, and how many exist on the platform, is essential to understanding value generation.
- Active APIs - Which APIs are consumers integrating with, and using on a regular basis? Not all APIs will be a hit, and not all APIs will be transformative, but there can be utility and functional APIs that make a significant impact. API management is key to understanding how APIs are being used, as well as how they are not being used, painting a complete picture of what is valuable, and what is not.
- Consumption - Measuring the nuance of how API consumers are consuming APIs, beyond just the number of calls being made. What is being consumed? What is the frequency of consumption? What are the limitations, constraints, and metrics for consumption, and the incentives for increasing consumption?
- Plans - Establishing many different access tiers and plans, containing many different APIs, allowing API service composition to evolve, creating different API packages, or products, that deliver across many different web, mobile, and device channels.
- Design Patterns - Which design patterns are being used, driving consumption and value creation? Are certain types of APIs favored, or specific types of parameters favored, backed up by API consumption numbers? Measuring successful API design, deployment, management, and testing patterns, and identifying the not so successful patterns, then incorporating these findings into the road map, as well as platform governance.
- Applications - Considering how metrics can be applied at the client, and SDK level, providing visibility and insight into how APIs are being consumed, from the perspective of each type of actual application being used.
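To make a couple of these elements more concrete, here is a minimal sketch of how raw API management log entries might get rolled up into active user and consumption measures. The log record shape, and the summarize function, are hypothetical illustrations, not any particular vendor’s API:

```javascript
// Hypothetical roll-up of raw API management log entries into two of the
// measures above: active users (unique application keys), and per-API
// consumption counts. The record shape here is an assumption.
function summarize(logEntries, periodStart) {
  const activeKeys = new Set();
  const callsPerApi = {};

  for (const entry of logEntries) {
    if (entry.timestamp < periodStart) continue; // only count the current period
    activeKeys.add(entry.appKey);                // unique application keys seen
    callsPerApi[entry.path] = (callsPerApi[entry.path] || 0) + 1;
  }

  return { activeUsers: activeKeys.size, callsPerApi: callsPerApi };
}

// Example usage with a couple of made up log entries
console.log(summarize([
  { appKey: 'key-1', path: '/organizations/', timestamp: 1509000000 },
  { appKey: 'key-2', path: '/organizations/', timestamp: 1509000500 },
  { appKey: 'key-1', path: '/services/', timestamp: 1509000600 }
], 1508900000));
```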
API management is the future of public data, content, and other resource management. Understanding API consumption, and measuring, analyzing, and reporting upon it, is required to evolve healthy API resources forward, and develop viable applications that put API resources to use. API management will continue to become the tax system, the royalty system, and the value capture mechanism across government agencies and public resources in the coming years.
17. Please describe your experience and the approach you would take as the API Program Core Team to deploy an effective strategy that will allow VA to distribute the development of API’s across multiple teams and multiple contractor groups while reducing friction, risk and time to market.
A microservice, CI/CD, DevOps approach to providing and consuming APIs has begun to shift the landscape for how API platforms support the API life cycle across many disparate teams, trusted external partners, and even 3rd party application and system developers. Distilling all elements of the API supply chain down to modular, well defined components helps establish a platform that can be centralized, while encouraging a consistent approach to the federated delivery of APIs and supporting resources.
I focus on making API operations more reusable, agile, and effective in a handful of areas:
- Github - Everything starts, lives, and is branched and committed to via Github, either as a public or private repository. Definitions, code, documentation, training, and all other resources exist as repositories allowing them to be versioned, forked, collaborated around, and delivered wherever they are needed.
- Governance - Establishing API design, deployment, management, testing, monitoring, and other governance practices, then making them available across teams as living, collaborative documents and practices, allowing teams to remain independent, yet work together in concert at scale, using a common set of healthy practices.
- Training - Establishing a training aspect to platform operations, making sure materials are readily developed to support all stops along the API lifecycle, and the curriculum is circulated, evolved, and made available across teams. Ensuring that all training remains relevant, up to date, and versioned to keep in sync with all platform changes.
- Coaching - Cultivating a team of API coaches who are embedded across teams, but also report back and work with the central API team, helping train and educate around platform governance, and helping improve a team's ability to deliver within this framework. API coaches make API compliance a two way street, feeding resources downstream to teams, but then also working with teams to ensure their needs are being met, reducing any friction that exists with delivering in a consistent way that meets the platform objectives.
- Modular - Keeping everything small, modular, and limiting the time and budget needed to define, design, deploy, manage, version, and deprecate, allowing the entire platform to move forward in concert, using smaller release cycles.
- Continuous - API governance, training, coaching, and all other operational level support across teams is also continuously being deployed, integrated, and evolved, ensuring that teams can work independently, but also be in sync when possible. Reducing bottlenecks, eliminating road blocks, and minimizing friction between teams, and across APIs that originate from many different systems.
API operations needs to work much like each individual API. It needs to be small, modular, well defined, doing one thing well, but in a way that can be harnessed at scale to deliver on platform objectives. The API lifecycle needs to be well documented, with disparate teams well educated regarding what healthy API design, deployment, management, and testing looks like. We make operations reusable, and forkable across teams by establishing machine readable models and supporting tooling that help teams be successful independently, but in alignment with the greater platform good.
18. Please describe the roles and skill sets that you would be assembling to establish the API Program core team.
Our suggested team for the Lighthouse Core API management implementation reflects how a federated, yet centralized API platform team should work. There are a handful of disparate groups, each bringing a different set of skills to the table. We are always working to augment and grow this team to meet each project we work on, and remain open to working with 3rd party developers. We bring forward some skills we already have on staff, while also introducing some new skills, assembling a well rounded set of voices at the table to support the Lighthouse API platform.
- Architects - Backend, platform, and distributed system architects.
- Database - Variety of database administration, analysts, and scientists.
- Veterans - Making sure the veteran perspective is represented on the platform team.
- Healthcare - Opening the door to some healthcare practitioner advisor roles, making sure a diverse group of end users are represented.
- Designers - Bringing forward the API design talent required to deliver consistent APIs at web scale.
- Developers - Provide a diverse set of developer skills supporting a variety of programming languages and platforms.
- Security - Ensure the necessary security operations skills are present to pay attention to the bigger security picture.
- Testing - Provide dedicated testing and quality assurance talent to round off the set of skills on the table.
- Evangelists - Make sure there is an appropriate amount of outreach, communication, and storytelling occurring.
As the number of APIs grows, and an awareness develops of what consumers are desiring, additional team members will be sought to address specific platform needs around emerging areas like machine learning, streaming and real time, or other aspects we haven’t addressed in this RFI.
19. How would you recommend the Government manage performance through Service Level Agreements? Do you have commercial SLAs established that can be leveraged?
Each stop along the provider dimensions, as well as the consumer perspective, will be quantified, logged, measured, and analyzed as part of API management practices. As part of this process, each individual area will have its set of metrics and questions that support API governance objectives. These are some of the areas that will be quantified, and measured, for inclusion in service level agreements:
- Database - What are the benchmarks for the database, and what are the acceptable ranges for operation? Log, measure, and evaluate historically, establishing a common scoring for the delivery of database resources.
- Server - Similar to the database layer: what are the units of compute available for API deployment and management, and what are the benchmarks for measuring the performance, availability, and reliability of each instance, no matter how big or small?
- Serverless - Establishing benchmarks for individual functions, and defining appropriate scaling and service levels for each script, beyond each API.
- Gateway - Through gateway logging, measuring and understanding performance and availability at the gateway level. Factoring in caching, rate limits, as well as backend dependencies, then establishing a level of service that can, and should, be met.
- APIs - Through monitoring and testing, each API should have its tests, assertions, and overall level of service defined. Is the API directly meeting its service level agreement? Is an API meeting API governance requirements, as well as meeting its SLA? What is the relationship between performance and compliance?
Each stop along the API lifecycle should have its own guidance when it comes to governance, and possess a set of data points covering monitoring, testing, and performance. All of this can be used to establish a comprehensive service level agreement definition that goes beyond just uptime and availability, ensuring that APIs are meeting the SLA, and doing so through API governance practices that are working.
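To illustrate, here is a minimal sketch of what a machine readable service level definition spanning these layers might look like. Every name and threshold below is a hypothetical placeholder, not VA guidance:

```yaml
# Hypothetical service level definition spanning each layer of the stack.
# All names and thresholds are illustrative placeholders.
service: accounts-api
sla:
  database:
    query_latency_ms_p95: 50        # acceptable range established from benchmarks
  server:
    availability_percent: 99.9
  serverless:
    function_duration_ms_p95: 300
  gateway:
    latency_ms_p95: 250
    rate_limit_per_minute: 1000
  api:
    uptime_percent: 99.9
    error_rate_percent_max: 1.0
    governance_compliance: required # the SLA is only met if governance is followed
```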
20. If the Government required offerors to build APIs as part of a technical evaluation, what test environments and tools would be required?
We would be willing to build APIs in support of a technical evaluation, using any existing API gateway solution, via any of the existing cloud platforms. We would need access to deploy mock and virtualized APIs using a gateway environment, as well as to configure and work with IAM, and some sort of DNS layer, to handle addressing and routing for all prototypes deployed.
Our team is capable of working in any environment, and we suggest the test environments and tools reflect existing operations, supporting what is already in motion. We can make recommendations, bring our open cloud, SaaS, and open tooling to the table, and build a case for why another tool might be needed and make more sense as part of the technical evaluation.
To facilitate the defining, and evolution, of a technical evaluation, we recommend starting a private Github repository, either initiated by the VA, or by the offeror, where a set of requirements and definitions can be established as a seed for the technical evaluation, as well as quickly become many seeds for an actual platform implementation.
That was a little more work than I anticipated. I had scheduled about six hours to think through this, and go through my research to find relevant answers. When I started this response, I was planning on adding a whole other section for suggestions beyond the questions that were being asked, but I think the questions are able to get at more of the proposed strategy than I anticipated. There were some redundant areas of this RFI, as well as some foreshadowing regarding what would be possible with such an API, but I think the questions were able to get at what is needed.
I’m not sure how my responses will size up with my partners, or alongside the other responders to the RFI. My responses reflect what I see working across the wider public API space, and are based upon what I’ve learned opening up data, and trying to move forward other API projects across other federal agencies, all the way down to the municipal level. I’m guessing my approach will be heavier on the business and politics of API platform operations, and lighter on the reliance on the technical elements, addressing in my opinion the biggest obstacle to API success–culture, silos, and delivering consistently across disparate groups working under a common vision.
Next, I will publish this and share it with my partners for inclusion in the final response for the group. I’ll make it available on API Evangelist, so that others can learn from the response to the RFI. I’m not sure where this will go next, but I can’t ever miss out on an opportunity to respond to inquiries about API operations at this level. This RFI was the most sophisticated API-focused vision to come out of the VA in my experience. Regardless of what happens next, it demonstrates to me that there are groups within the VA that understand the API potential, and what it can do for veterans, and the organizations, hospitals, and communities that support them.
When I talk to ordinary people about what I do as the API Evangelist, they tend to think APIs don’t have much of anything to do with their world. APIs exist in a realm of startups, technology, and make believe that doesn’t have much to do with their ordinary lives. When trying to make the connection with folks on airplanes, in the hotel lobby, and at the coffee shop, I always resort to the most common API-driven thing in all of our lives–the smart phone. Pulling out my iPhone is the quickest way I can go from zero to API understanding, with almost anyone.
When people ask what an API is, or how it has anything to do with them, I always pull out my iPhone, and say that all of the applications on the home page of your mobile phone use APIs to communicate. When you post something to your Facebook wall, you are using the Facebook API. When you publish an image to Instagram, you are using the Instagram API. When you check the balance of your bank account, you are using your bank’s API. APIs are everywhere. We are all using APIs. We are all impacted by good APIs, and bad APIs. Most of the time we just don’t know it, and are completely unaware of what is going on behind the curtain that is our smart phones.
I started paying attention to APIs in 2010 when I saw the impact mobile phones were beginning to have in our lives, and the role APIs were playing behind this new technological curtain. In 2017, I’m watching APIs expand to our homes via our thermostats and other appliances. I’m seeing APIs in our cars. Applied to security cameras, sensors, signage, and other common objects throughout public spaces. APIs aren’t something everyone should be doing, however I feel they are something that everyone should be aware of. I usually compare it to the financial and banking system. Not everyone should understand how credit and banking systems work, but they should damn sure understand who has access to their bank account, how much money they have, and other functional aspects of the financial system.
When it comes to API awareness I don’t expect you to be able to write code, or understand how OAuth works. However, you should know whether or not an online service you are using has an API or not, and you should understand whether or not OAuth is in use. Have you used OAuth? I’m pretty sure you have as part of your Facebook or Google logins, when you have authenticated with 3rd party applications, giving them access to your Facebook or Google data. You probably just didn’t know what you were using was OAuth, and that it was providing API access, as you clicked next, next, through the process. I’m not expecting you to understand the technical details, I am just injecting a little more awareness around the API-driven things you are already doing.
We are all using APIs. We are all being impacted by APIs existing, or not existing. We are being impacted by unsecured APIs (ie. Equifax). We are all being influenced and manipulated by bots who are using Twitter, Facebook, and other APIs to bombard us with information. On a daily basis we are being targeted with personalized advertising that surveils us using APIs. We should all be aware that these things are happening, and have some ownership in understanding what is going on. I don’t expect you to become an API expert, or learn to hack, I’m just asking that you assume a little bit more accountability when it comes to understanding what goes on behind the digital curtain for the production of our online world.
I recently had the pleasure of engaging with my friend Bryan Mathers (@bryanmathers) of Visual Thinkery, and as I was telling this story, he doodled the image you see here. Giving me a digital representation of how I use my mobile phone to help people understand APIs. It gives me a visual anchor for telling this story over and over here on my blog. My readers who have heard it before can tune it out. These stories are for my new readers, and for me to link to when helping share this story with folks who are curious about APIs, and what I do. I feel it is important to help evangelize APIs, not because everybody should be doing them, it is because everyone should be aware that others are doing APIs. I’m stoked to have visual elements that I can use in this ongoing storytelling, that help connect my readers and listeners with APIs, using objects that are relevant and meaningful to their world. Thanks Bryan!!
Static website, and headless CMS approaches to providing API driven solutions have grown in popularity in recent years. Jekyll has been leading the charge when it comes to static website deployment, partly due to Github being behind the project, and their adoption of it for Github Pages. I’ve been pushing forward a new approach to using Jekyll as a hypermedia client to help deliver some of my API training and curriculum, and as part of this work I’m giving a talk at APIStrat next week on the concept. APIStrat is a great forum for this kind of project, helping me think through things in the form of a talk, with the opportunity to share with an audience, and get immediate feedback on its viability, which I can then use to push forward my thinking on this aspect of my API work.
If you aren’t familiar with Jekyll, I recommend spending some time reading about it, and even implementing a simple site using Github Pages. I have multiple non-developers in my life who I’ve introduced to Jekyll, and it has made a significant impact on the way they do their work. Even if you do know about Jekyll, I additionally recommend spending time learning more about the underscore data folder, and the concept of Jekyll collections. Every Jekyll site has its default data collection, and the opportunity to create any additional collections, using any names you desire. Within these folders you can publish static CSV, JSON, and YAML data files, using any schema you desire. All of these data collections then become available for referencing through the static Jekyll website or application.
All of this functionality is native Jekyll. Nothing revelatory from me. What I’m looking to push forward is what happens when the data collections use a hypermedia media type. I’m using Siren, which allows me to publish structured data collections, complete with links that define a specific experience across the data and content stored within the collections. The next piece of the puzzle involves leveraging Jekyll, and its use of Liquid, as a hypermedia client, allowing me to build resilient websites and applications that are data driven, but can also evolve and change without breaking. All I do is update the static data behind them, with minor revisions to the client–all using Github’s API or Git infrastructure.
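To give a rough sense of how this comes together, here is a minimal sketch. The file name, schema, and link relations below are my own illustrative choices, a simplified flavor of Siren rather than the full media type. A static data file might live at _data/curriculum.yml:

```yaml
# _data/curriculum.yml - a simplified, Siren-flavored data collection
class: ["curriculum"]
properties:
  title: "API Design 101"
links:
  - rel: ["self"]
    href: "/curriculum/"
    title: "API Design Curriculum"
  - rel: ["next"]
    href: "/curriculum/what-is-an-api/"
    title: "What Is An API?"
```

A Jekyll page then acts as the client, using Liquid to render whatever links the data provides, so updating the data file changes the experience without touching the client:

```liquid
{% for link in site.data.curriculum.links %}
  <a href="{{ link.href }}">{{ link.title }}</a>
{% endfor %}
```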
I’ve found Jekyll’s data features to be infinitely beneficial, and they are something I use to run my entire network of sites. Adding the hypermedia layer is allowing me to push forward the storytelling experience around my API research and curriculum. The first dimension I find interesting is that I can publish updates to the data for each project without breaking the client–classic hypermedia benefits. Additionally, I can publish data and changes using Github workflows, and because every application runs as an independent Github Pages driven website, it can stay in sync with updates via commits, or stay completely independent and never receive any changes, which introduces another dimension of client stability–which is also very hypermedia.
I am still in the beta phase of evolving some of my API projects to use this approach. I’m pioneering this work using my API design curriculum, and I’ll be publishing an instance as part of my session at APIStrat in Portland, OR next week. I’ll make sure I cover some of the Jekyll and Github basics for people in the audience who aren’t familiar with these areas. Then I’ll walk through how I’m using Jekyll collections, includes, and the Liquid syntax to behave as a hypermedia client for my Siren data collections. All my talks live on Github, so if you can’t make it you can still follow along with my work. However, if you are going to be in Portland next week, make sure you come by and give a listen to my talk–I’d love to get your feedback on the concept.
Every single API project I’m working on currently has one or more business users involved, or specifically leading the work. With every business user, no matter how fearless they are, there is always a pretty heavy perception that some things are over their head. I see this over and over when it comes to API design, and the usage of OpenAPI to define an API. I’ve known a handful of folks who aren’t programmers, and have learned OpenAPI fluently, but for the most part, business users tend to put up a barrier when it comes to learning OpenAPI–it lives in the realm of code, and exists beyond what they are capable of.
I get that folks are turned off by being exposed to code. Learning to read code takes a significant amount of time, and with more frameworks, libraries, and other layers, you can find yourself pretty lost, pretty quickly. However, with OpenAPI, everything I do tends to be YAML, making it much more readable to humans. While there are rules and structure to things, I don’t feel it is out of the realm of the average user to study, learn, and eventually bring into focus. Along with the OpenAPI rules, there is a good deal of HTTP literacy required to fully understand what is going on, but I feel the API design process is a much more forgiving environment in which to learn these things, for both developers and business users.
I put the ownership for everything technical being complicated squarely on the shoulders of IT and developer folks. We’ve gone out of our way to make this stuff difficult to access, and out of reach of the average person. We’ve spent decades keeping people we didn’t see fit from having a seat at the table. However, increasingly I also feel there is a significant amount of ownership to be given to business users who are willing to put up walls when there really is no reason. I think the API design layer is one aspect of doing business with APIs where the average business user throws up unnecessary bluster around technical complexity that doesn’t actually exist. The definition of an API is simply a contract, no more complicated than a robust word document or spreadsheet, completely within the realm of understanding for any engaged user.
There is a pretty big opportunity to narrow the gap that exists between technical and business users with APIs. The challenge will be that many technologists want to keep things closed off to business users, so they can charge for, and generate revenue from, bridging the gap. We also have to figure out how to help business users better understand where the technical line is, the importance of becoming OpenAPI fluent, and of having a stake in the API design conversation. Once I get back to my API training work, I’m going to think about a version of my API design curriculum specifically for non-developer users. My goal is to figure out what it takes to onboard more business users to the world of API design, and make sure we balance out who is at the table when we are defining API solutions.
I’m needing to quantify some of the work that has occurred around my Human Services Data Specification work as part of a grant we received from Stanford. The grant has helped us push forward almost three separate revisions of the API definition for working with human services data, and one of the things I’m needing to do is quantify the work that has occurred specifically around the OpenAPI definitions. At this point the specification is pretty verbose, and now spans multiple documents, making it difficult to quantify and share within an executive summary. To help support this, I wanted to craft some language that could help introduce the value that has been created to a non-technical audience.
The Human Services Data Specification (HSDS) provides the common schema for the accessing, storing, and sharing of human services data, providing a machine readable definition that human service practitioners can follow in their implementations. The Human Services Data API (HSDA) takes this schema, and provides an API definition for accessing, as well as adding, updating, and deleting data using a web API. While there are a growing number of code projects emerging that support HSDS/A, the center of the project is a set of OpenAPI definitions that outline the finer details of working with human services data via a web API.
With the assistance of the grant from Stanford, Open Referral was able to move the HSDA specification forward from version 1.0, to 1.1, 1.2, and now we are looking at delivering version 1.3 of the specification before the end of 2017. The core OpenAPI definition for HSDA provides guidance for the design of human services APIs, with a focus on the core set of resources needed to operate a human services project. There are a handful of core resources defined as part of what is called HSDA core:
- Organizations (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting organizational data. Describing the paths, parameters, and HSDS schema required to effectively communicate around organizations delivering human services.
- Locations (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting location data. Describing the paths, parameters, and HSDS schema required to effectively communicate around specific locations that provide human services.
- Services (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting services data. Describing the paths, parameters, and HSDS schema required to effectively communicate around specific human services.
- Contacts (OpenAPI Definition) - Providing a framework for adding, updating, reading, and deleting contact data. Describing the paths, parameters, and HSDS schema required to effectively communicate around contacts associated with organizations, locations, and services delivering human services.
This is considered to be the HSDA core specification, focusing on exactly what is needed to manage organizations, locations, services, and contacts involved with human services, and nothing more. As of version 1.2 of the specification we have spun off several additional specifications, which support the HSDA core specification, but are meant to reduce its complexity. Here are seven additional projects we’ve managed to accomplish as part of our recent work:
- HSDA Search (OpenAPI Definition) - A machine readable API definition for searching across organizations, locations, services, and contacts, as well as their sub-resources.
- HSDA Bulk (OpenAPI Definition) - A machine readable API definition dictating how bulk operations around organizations, locations, services, and contacts can occur, minimizing the impact on core systems.
- HSDA Meta (OpenAPI Definition) - A machine readable API definition describing all the meta data in use as part of any HSDA implementation, providing a history of everything that has occurred.
- HSDA Taxonomy (OpenAPI Definition) - A machine readable API definition for managing one or more taxonomies in use, and how these taxonomies are applied to core organizations, locations, services, and contacts.
- HSDA Management (OpenAPI Definition) - A machine readable API definition for the management level details of operating an HSDA implementation, providing guidance for user access, application management, service usage, and authentication and logging for the HSDA meta system.
- HSDA Orchestration (OpenAPI Definition) - A machine readable API definition for orchestration data exchanges between HSDA implementations, allowing for schedule or event based integrations to be executed, involving one or many HSDA implementations.
- HSDA Validation (OpenAPI Definition) - A machine readable API definition for validating that an HSDA implementation follows the API and data schema, providing a validation API that can be used to keep any platform compliant.
The Stanford grant has allowed us to move HSDA forward three separate versions, but has also given us the time to properly break down HSDA into sensible, pragmatic services that do one thing, and do it well. The OpenAPI specifications provide a machine readable blueprint that can be used by any city, county, or other organization to define, implement, and manage their human services implementation, keeping it in sync with Open Referral’s guidance. Each of the OpenAPI definitions provides a distilled down version of Open Referral guidance that anyone can follow within individual operations–designed for both business and technical users to adopt.
The HSDA specification is defined using OpenAPI, an open specification format for defining web APIs, which is now part of the Linux Foundation. OpenAPI allows us to define HSDA using JSON Schema, but also allows us to define how that schema is used as part of API requests and responses. It provides a YAML or JSON definition of HSDA core, as well as each individual project. These definitions are then used to design, deploy, and manage individual API implementations, including providing documentation, code libraries, testing, and other essential aspects of operating an API that supports web, mobile, device, voice, spreadsheet, and other types of applications.
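To give a flavor of what these definitions contain, here is an illustrative fragment in the OpenAPI 2.0 style. This is a simplified sketch of my own, not the actual HSDA definition:

```yaml
# An illustrative fragment only, not the actual HSDA OpenAPI definition
swagger: "2.0"
info:
  title: HSDA Organizations (illustrative)
  version: "1.2"
paths:
  /organizations/:
    get:
      summary: Get a list of organizations
      parameters:
        - name: query
          in: query
          type: string
          description: Text to search organizations with
      responses:
        "200":
          description: A list of organizations
          schema:
            type: array
            items:
              $ref: "#/definitions/Organization"
definitions:
  Organization:
    type: object
    properties:
      id:
        type: string
      name:
        type: string
      description:
        type: string
```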
Hopefully this breaks down how the HSDA OpenAPI specifications are central to what Open Referral is doing. It provides the guidance that other human service providers can follow, going beyond just the data schema, and actually helping ensure access across many different implementations can work in concert. The HSDA OpenAPIs act as a set of rules that can guide individual implementations, while also ensuring their interoperability. When each human services provider adopts one of the HSDA specifications they are establishing a contract with their API consumers and application developers, promising they’ll deliver against this set of rules, ensuring their API will speak the same language as other human service APIs that also speak HSDA. This is the objective of Open Referral, and why we use OpenAPI as the central definition for everything we do.
When I first started API Evangelist I spent two days trying to create a logo. I then spent another couple of days trying to find a service to create something. Unhappy with everything I produced, I resorted to what I considered a temporary logo, where I just typed a JSON representation of the logo, mimicking what a JSON response for calling a logo through an API might look like. Seven years later, the logo has stuck, resulting in me never actually investing any more energy into my logo.
The API Evangelist imagery is long overdue for an overhaul. I stopped wearing the logo on my signature black t-shirts a couple of years back, and I do not want to reach the 10 year mark before I actually do anything new. My logo, and the other artwork I’ve accumulated over the last several years, played their role, but I’m stoked to finally begin the process of evolving some artwork for API Evangelist. To help me move the needle I’ve begun working with my friend Bryan Mathers (@BryanMMathers), experiencing his Visual Thinkery induced journey, where he generates image ideas by engaging in a series of conversations, producing whimsical, colorful, and entertaining images for anything he can imagine out of our discussions. It is something I’ve experienced as part of our parent company Contrafabulists, and I’m stoked to experience it as part of my API Evangelist work.
Bryan doodled the butterfly image while we were talking, and didn’t anticipate it would be the one I’d choose to be the logo. Honestly, at first I didn’t think it was logo quality, but after spending time looking at it and thinking about it, I feel it suits what I’m doing very well. It still has the technical elements in the brackets in the wing outline, and the digital or pixel nature of the color in the wings, but it moves beyond the tech, and represents the natural, more human side of things I would like to emphasize. The logo represents my API Evangelist ideas, and what I want them to be when I put them out there on my blog, and in my other work.
At first, a butterfly didn’t feel like Kin Lane, but I’m not creating a Kin Lane logo. I am creating an API Evangelist logo. When you think about the caterpillar to a butterfly evolution, and even the butterfly effect, it isn’t a stretch to apply all of this to what I work to do as the API Evangelist. I’m just looking to plant ideas in people’s heads with my work. I want my ideas to grow, expand, and take flight on their own. Making an impact on the world. I enjoy seeing my ideas fluttering around, adding color, adding motion, and presence around the API community. At the moment where I couldn’t imagine any image to represent API Evangelist, Bryan was able to extract a single image that I think couldn’t better represent what it is I do. #VisualThinkery
Bryan has created a horizontal, and vertical logo for me, using the butterfly. He’s also created a handful of stamps, and supporting images for my work. I will be sharing these other elements out as part of my storytelling, and see if I can find places for them to live more permanently somewhere on my network of site(s). Luckily, my website is pretty minimal, and the change of logo works well with it. I’m pretty happy with the change, and the introduction of color. Thanks Bryan for the work. I’m a little sad to retire my other logo, but it has run its course. I’ll still use it as a backdrop / background in some of my social profiles, and other work, as I think it reflects the history of API Evangelist. However, I am pretty stoked to finally have some new art for API Evangelist, that adds some new color to my work.
I am working through a project for a client, helping them deliver a portal for their API. As I do with any of my recommendations for my clients, I take my existing API research, and refine it to help craft a strategy that meets their specific needs. Each time I do this it gives me a chance to rethink some of the recommendations I’ve already gathered, as well as learn from new types of projects. I’ve taken the building blocks from my API portal research, as well as my API management research, and have taken a crack at organizing them into an outline that I can use to guide my current project.
Here is a walk through of the outline I’m recommending as part of a basic API portal implementation, to support a simple public API:
- Overview - / - Everything starts with a landing page, with a simple overview of what an API does.
Then you need some basics to help make on-boarding as frictionless as possible, providing everything an API consumer needs to get going:
- Getting Started - /getting-started/ - Handful of steps with exactly what is needed to get started.
- Authentication - /authentication/ - An overview of what type of authentication is used.
- Documentation - /documentation/ - Documentation for the APIs that are available.
- FAQ - /faq/ - Answer the most common questions.
- Code - /code/ - Provide code samples, libraries, and SDKs to get going.
Then get API consumers signed up, or able to login and get at their API keys as quickly as you possibly can:
- Sign Up - /developer/ - Provide a simple sign up for a developer account.
- Login - /developer/ - Allow users to quickly log back in after they have an account.
Next, provide a wealth of communication and support mechanisms, helping make sure developers are aware of what is going on:
- Blog - /blog/ - A simple blog dedicated to sharing stories about the API.
- Support - /support/ - Offer up a handful of support channels like email and tickets.
- Road Map - /road-map/ - Provide a road map of what the future will hold for the API.
- Issues - /issues/ - Share what the known issues are for the API platform.
- Change Log - /change-log/ - Publish a log of everything that has changed with the platform.
- Status - /status/ - Provide a status page showing the availability of API services.
- Security - /security/ - Publish an overview of what security practices are in place.
Make sure your consumers know how to get involved at the right level, making plans, pricing, and partnership opportunities easy to find:
- Plans - /plans/ - Offer a single page with API access plans and pricing information.
- Partners - /partners/ - Share what the partnership opportunities are, as well as existing partners.
Then let’s take care of the legal side of things, making sure API consumers are fully aware of the TOS, and other aspects of operations:
- Terms of Service - /terms-of-service/ - Make the terms of service easy to find.
- Privacy - /privacy/ - Publish a privacy statement for all API consumers.
- Licensing - /licensing/ - Share the licensing for API, code, data, and other assets.
I also wanted to make sure I took care of the basics for the developer accounts, fleshing out the common building blocks developers will need to be successful:
- Dashboard - /developer/dashboard/ - Provide a simple, comprehensive dashboard for developers.
- Account - /developer/account/ - Allow API consumers to change their account information.
- Applications - /developer/applications/ - Allow API consumers to add multiple applications, and receive API keys for each application.
- Plans - /developer/plans/ - If there are multiple plans, allow developers to change plans.
- Usage - /developer/usage/ - Show history of all API usage for API consumers.
- Billing - /developer/billing/ - Allow API consumers to add, update, and remove billing information.
- Invoices - /developer/invoices/ - Share a listing, as well as the details, of all invoices for usage.
Then I had a handful of other looser items that I wanted to make sure were here. Some of these will be for the second phase of development, but I want to make sure they are on the list:
- Branding - /branding/ - Providing a logo, branding guidelines, and requirements.
- Embeddable - /embeddable/ - Share a handful of buttons and widgets for use by consumers.
- Webhooks - /webhooks/ - Publish details on how to setup and manage webhook notifications.
- iPaaS - /ipaas/ - Information on Zapier, and other iPaaS integration solutions that are available.
- Spreadsheets - /spreadsheets/ - Share any spreadsheet plugins or connectors that are available for integration.
That concludes what I’d consider to be the essential building blocks for an API. Sure, there are some more minor APIs that can operate on a bare bones version of this list, but for any API looking to conduct business at scale, I’d recommend considering everything on this list. It reflects what I find across leading providers like Stripe and Twilio, and reflects what I like to see from an API provider when I am getting up and running with any API.
I have Jekyll templates for each of these building blocks, which use the Bootstrap UI framework for the layout. I am updating them for this project, then I will publish them again as a set of forkable tools that anyone can implement. I’m going to publish a new API portal that runs as an interactive outline of essential building blocks, then creates a new Github repository for a user, and publishes each of the selected components to the repo. Providing a buffet of API developer portal tools anyone can put to work for their API without much work.
I wrote about being able to import an OpenAPI into the AWS API Gateway to jumpstart your API the other day. OpenAPI definitions are increasingly used for every stop along the API life cycle, and being able to import an OpenAPI to start a new API, or update an existing one in your API gateway, is a pretty important feature for streamlining API operations. OpenAPI is great for defining the surface area when deploying and managing your API, as well as potentially generating client SDKs, and interactive API documentation for your API developer portal.
Another important aspect of this API lifecycle is being able to get your definitions out in a machine readable format as well. All service providers should be making API definitions a two-way street, just like Amazon does with the AWS API Gateway. Using the AWS Console you can export an OpenAPI definition for any API. What makes things interesting is you can also export an OpenAPI complete with all the OpenAPI extensions they use to customize each API within the API Gateway. Also, they provide an export to OpenAPI with Postman specific extensions, allowing you to use it in the desktop client tooling when developing, as well as when integrating with any API.
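As a quick sketch of what pulling an export out looks like using the AWS SDK for JavaScript, where the API identifier and stage name are placeholders of my own:

```javascript
// Sketch: export an API definition from the AWS API Gateway, complete
// with vendor extensions. The restApiId and stageName are placeholders.
const AWS = require('aws-sdk');
const fs = require('fs');

const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

apigateway.getExport({
  restApiId: 'abc123',                   // your API's identifier
  stageName: 'prod',
  exportType: 'swagger',                 // the OpenAPI export
  accepts: 'application/json',
  parameters: { extensions: 'postman' }  // or 'integrations' for gateway extensions
}, (err, data) => {
  if (err) return console.error(err);
  fs.writeFileSync('openapi-export.json', data.body);
});
```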
I’ve said it before, and I’ll say it again. Every API service provider should allow for the importing and exporting of common API definition formats like OpenAPI and Postman. If you are selling services and tooling to API designers, developers, architects, and providers, you should ALWAYS provide a way for them to generate a static definition of what is going on–bonus, if you allow them to publish API definitions to Github. I know some API service providers like to allow for API import, but worry about customers abandoning their service if there is an export. Too bad, offer better services, and affordable pricing models, and people will stick around.
Beyond selling services to folks with day jobs, having the ability to import and export an OpenAPI allows me to play around with tools more. Using the AWS API Gateway I am setting up a number of APIs, then exporting them, and saving the blueprints for when I need them as part of a story, or for use in a client’s project. Having the ability to import and export an OpenAPI helps me better deliver actual APIs, but it also helps me tell better stories around what is possible with an API service or tool. If I can’t get in there and actually test drive, play with, and see how things work, it is unlikely I will be telling stories about it, or have it in my toolbox when I come across a project where I’ll need to apply a specific service or tool.
If you are having trouble making your API service or tool speak OpenAPI, Postman, or other formats, make sure and check out API Transformer, which provides an API you can use to convert between formats. So if you can support one format, you can support them all. They even have the ability to take a .har file and generate an OpenAPI. I recommend natively supporting the importing and exporting of OpenAPI, then using API Transformer to also allow for importing and exporting in other formats. When you handle the import, you’ll always just make sure it speaks OpenAPI before you ingest it. When a service supports OpenAPI import and export, it is much more likely I’m going to play with it more, and it increases the chance I’m going to bake it into the life cycle for one of the projects I’m working on for a client. I’m sure other folks are thinking along the same lines, and are grateful when they can get their API definitions in and out of your platform.
There are many reasons you want to have a road map for your API. It helps you communicate with your API community about where you are going with your API. It also helps you have a plan in place for the future, which increases the chances you will be moving things forward in a predictable and stable way. When I’m reviewing an API and I don’t see a public API road map available, I tend to give them a ding on the reliability and communication of their operations. One of the reasons we do APIs is to help us focus externally with our digital resources, in which communication plays an important role, and when API providers aren’t communicating effectively with their community, there are almost always other issues right behind the scenes.
A road map for your API helps you plan, and think through how and what you will be releasing for the foreseeable future. Communicating this plan externally helps force you to think about your road map in the context of your consumers. Having a road map, and successfully communicating about it via a blog, on Twitter, and other channels, helps keep your API consumers in tune with what you are doing. In my opinion, an API road map is an essential building block for all API providers, not only because it has direct value for the health of API operations, but because it also provides an external sign of the overall health of a platform.
Beyond the direct value of having an API road map, there are other reasons for having one that will go beyond just your developer community. In a story in Search Engine Land about Google Posts, the author directly references the road map as part of their storytelling. “In version 4.0 of the API, Google noted that “you can now create Posts on Google directly through the API.” The changelog include a bunch of other features, but the Google Posts is the most notable.” Adding another significant dimension to the road map conversation, and helping out with SEO, and other API marketing efforts.
As you craft your road map you might not be thinking about the fact that people might be referencing it as part of stories about your platform. I wouldn’t sweat it too much, but you should at least make sure you are having a conversation about it with your team, and maybe add an editorial cycle to your road map publishing process. Making sure what you publish will speak to your API consumers, but also make for quotable nuggets that other folks can use when referencing what you are up to. This is all just a thought I am having as I’m monitoring the space, and reading about what other APIs are up to. I find road maps to be a critical piece of API communication and support, and I depend on them to do what I do as the API Evangelist. I just wanted to let you know how important your API road map is, so you don’t forget to give it some energy on a regular basis.
My partner in crime Audrey Watters crafted a phrase that I use regularly, that “APIs reduce everything to a transaction”. She first said it jokingly a few years back, but it is something I regularly repeat, and think about regularly, as I feel it profoundly describes the world I study. I like the phrase because of its dual meaning to me. If I say it with a straight face, in different company, I will get different responses. Some will be positive, and others will be negative. Which I think represents the world of APIs in a way that shows how APIs are neither good, nor bad, nor neutral.
If you are an API believer, when I describe how APIs reduce everything to a transaction, you probably see this as a positive. You are enabling something. You are distilling down aspects of our personal and business worlds into a small enough representation that it can be transmitted via an API. Enabling payments, messages, likes, shares, and other aspects of our digital economy. Your work as an API believer is all about reducing things down to a transaction, so that you can make people’s lives better, and enable some kind of functionality that will deliver value online, and via our mobile phones. API transactions are enablers, and by using APIs, you are working to make the world a better place.
If you aren’t an API believer, when I describe how APIs reduce everything to a transaction, you are probably troubled, and left asking why I would want to do this. Not everything can or should be distilled down into a single transaction. Shifting something to be a transaction opens it up for being bought and sold. This is the nature of transactions. Even if you are delivering value to end-users by reducing a piece of their world to a transaction, now that it is a transaction, it is vulnerable to other market forces. It is these unintended consequences, and side effects of technology that many of us geeks do not anticipate, but market forces see as an opportunity, and benefit greatly from mindless technologists like us doing the hard work to transform the world into transactions that they can get their greedy little hands on.
Our desire to purchase a product is reduced to a transaction. Nothing bad here, right? Our conversations, images we take, videos we watch, and our location becomes a transaction. Getting a little scarier. Then our healthcare, education, and personal thoughts are turned into transactions. They are being bought and sold online, and via our mobile phones. What we buy, photos of our children, health records, and our most personal thoughts are all being reduced to a transaction so they can be transmitted via web and mobile applications. Secondarily, because it has been reduced to a transaction, it can be bought and sold on data markets, and become a transaction in the surveillance economy. Fulfilling the darker side of what we know as reducing everything to a transaction.
This is one of the reasons I’m fascinated with APIs. My technologist brain is attracted to the process of reducing things to a transaction, so they can be transmitted via APIs for use in web, mobile, and device applications. I’m obsessed with reducing everything to a transaction. Along this journey I’m also fascinated by our collective lack of awareness regarding how these transactions can be used for harm. I’m intrigued by our inability to think through this as technologists, and even become annoyed when it is pointed out. I’m fascinated by our lack of accountability as technologists when the transactions we are responsible for are used to harm people, exploit individuals, and are bought and sold on the open market like we are cattle. We seem unable to stop or slow what we have set in motion, and must keep reducing everything to a transaction at all cost.
Why do we need photos of our children to be transactions? Why do we need our DNA to be transactions? Why do we need every moment, every thought, every impulse in our day to be a digital transaction? Once anything is reduced to a transaction in our world, why do we then feel compelled to measure it? The steps we take? The calories we consume? The hours we sleep? These are all aspects of our lives that have been reduced to transactions. Why? For our benefit? To improve our life? Or is it just so that someone can sell us something, or even worse, just sell this little piece of us? This is why everything is being reduced to a transaction. The reasoning behind it is always about making our lives better, and improving the quality of our daily life, but the real reason is always about distilling down a piece of who we are into something that can be bought and sold–transacted.
I am always looking for the cheapest, easiest ways to get things done in the world of APIs. As a small business owner I’m always on the hunt for hacks to get done what I need, and hopefully make things easier for my users, while keeping things free, or at least minimally priced, for my business. When it comes to my simplest APIs, I’m not looking to fully manage them, but I do want anyone using them to authenticate, and pass in API keys, so that I can track their use. In some cases I’m going to bill against this usage, but for the most part I just want to secure the APIs, and quantify their consumption.
In some situations I just require that an appid and appkey be passed in with each API call. I do not rate limit, or bill against usage. I am just looking to identify consumers. For other projects I’m actively billing against consumption, but I’m doing this by processing the web server logs for the API. Again, bare bones operations. For the next level up, in the login PHP script I actually add a user to a specific plan I’ve setup for an API using the AWS API Gateway, and add their key to the gateway. Now a user has access to APIs, and the platform has (limited) access to their Github account, and a way to identify all their API usage.
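I do this in a PHP login script, but here is the equivalent flow sketched with the AWS SDK for JavaScript, where the usage plan id is a placeholder:

```javascript
// Sketch: provision an API key for a newly logged in consumer, and attach
// it to an AWS API Gateway usage plan. The plan id is a placeholder, and
// githubLogin is whatever identity the login flow handed back.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

async function provisionConsumer(githubLogin) {
  // Create a key identified by the consumer's Github login
  const key = await apigateway.createApiKey({
    name: githubLogin,
    enabled: true
  }).promise();

  // Attach the key to the plan that defines rate limits and quotas
  await apigateway.createUsagePlanKey({
    usagePlanId: 'plan123',
    keyId: key.id,
    keyType: 'API_KEY'
  }).promise();

  return key.value; // hand the key back to the consumer
}
```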
I use this approach to account authentication for reading and writing to the underlying Github repository I’m using for API developer portals, and other applications. I’m also using these accounts to access API resources I’m serving up across a variety of projects. The combination allows for some pretty easy, but powerful engagement with API consumers via Github. The best part is I can rely on Github security and accounts for this layer of my API management. Next, I like that I can extend the features of their account using the AWS API Gateway. Extending the API management capabilities for specific projects when I have higher needs.
You can find the scripts for this on Github. If you don’t want to host the server side portion of this, I recommend checking out OAuth.io, as they have a simple SaaS version of this that will work beyond just Github. However, if you are looking for quick and dirty solution, this one should work just fine.
I’ve set up a few Lambda scripts from time to time, but haven’t had any dedicated project time to push forward API serverless concepts. Over the weekend I had a chance to deploy a couple of APIs using AWS DynamoDB, Lambda, and API Gateway, lighting up some of the serverless API possibilities in my brain. Like most areas of the tech sector, I think the term is dumb, and there is too much hype, but underneath there are some interesting possibilities, at least enough to keep me playing around with things.
Right now my primary API setup is an Amazon Aurora (MySQL) backend, with APIs deployed on EC2, using the Slim API framework in PHP. It is clean, simple, and gets the job done. I use 3Scale, or Github, for the API management layer. This new approach simplifies some things for me, but definitely goes further down the AWS rabbit hole with the adoption of API Gateway and Lambda. It also introduces interesting enough benefits that I am considering it for use on some specific projects.
Identity and Access Management (IAM) Role
The first thing you need to do to make the whole AWS thing work is setup a role using AWS IAM. I created a role just for this project, and added CloudWatchFullAccess, AmazonDynamoDBFullAccess, and AWSLambdaDynamoDBExecutionRole. I need this role to handle a bunch of management level things with the database, and logging. IAM is one of the missing aspects of hand crafting my APIs, and is why I am considering adopting it on behalf of my customers, to help them get a handle on security.
Simple API Database Backends Using AWS DynamoDB
I am a big fan of relational databases, mostly out of habit and experience. A client of mine is fluent in AWS DynamoDB, which is a simple NoSQL solution, so I felt compelled to ensure the backend database for their APIs spoke DynamoDB. It’s a pretty simple database, so I got to work creating an account table, added a simple JSON object that contained 4 or 5 fields, and fired up an index for the simple accounts database. The databases I’m creating are meant to track aspects of API management, so the tables won’t end up being too large, or have high performance requirements. Regardless, DynamoDB is a perfect backend for APIs, leaving me unsure why I don’t use the platform more often.
Using Lambda Functions Behind The API
Instead of firing up an Amazon EC2 instance and hand crafting my API framework, I crafted a handful of serverless scripts in Node.js that will run as independent Lambda functions. I’m going to eventually need a whole bunch of functions, but to get me going with this new API I crafted four separate Lambda functions that I can use to drive the API (a minimal sketch of one follows the list):
- searchAccounts - Using the DynamoDB API scan method to query the table.
- addAccount - Using the DynamoDB API putItem method to add a record to the table.
- updateAccount - Using the DynamoDB API updateItem method to update a record in the table.
- deleteAccount - Using the DynamoDB API deleteItem method to delete a record from the table.
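To give you a sense of how little code each of these functions requires, here is a minimal sketch of what a function like searchAccounts might look like, assuming a DynamoDB table named accounts:

```javascript
// Minimal sketch of a searchAccounts Lambda function, scanning a
// DynamoDB table named 'accounts' (the table name is a placeholder).
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  // Scan the table and return the matching records as the response.
  dynamo.scan({ TableName: 'accounts' }, (err, data) => {
    if (err) return callback(err);
    callback(null, data.Items);
  });
};
```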
Publish An API Using AWS API Gateway The last piece of the puzzle for this story is the API. Each Lambda function accepts and returns JSON, which is technically an API, but there is no management layer, or RESTful infrastructure present. The AWS API Gateway gives me the ability to craft API paths, with accompanying GET, POST, PUT, DELETE, and other methods. For each method I add, I’m given four options for connecting to my backend: making an HTTP call, creating a mock API, leveraging another AWS service, or connecting to a Lambda function. I quickly wired up a GET, POST, PUT, and DELETE to each of my functions, and added my API to an AWS API Gateway plan, requiring API keys, and limiting who can access what.
I now have an accounts API which allows me to add, update, delete, and search for accounts. My data is stored in DynamoDB, and served up via Lambda functions, through the API Gateway. It is secured. It is scalable. I can easily quantify what my database, function, and gateway usage and costs will end up being. I get why folks are interested in serverless. It’s clean. It’s modular. It scales. It is very manageable. I don’t feel like it will be the answer for every API I need to deploy, but it does make sense for quickly deploying APIs for customers who are open to AWS, and need things to be secure, highly performant, and scalable.
A serverless approach definitely takes the sysadmin load off a little bit, especially when you depend on DynamoDB for the backend. DynamoDB, Lambda, and API Gateway offer a pretty nice stack that can auto tune and scale itself. I’m going to fire up five separate APIs using this new approach, and set up some monitoring and testing to see how it delivers, and maybe get a handle on the costs associated with operating an API like this. I still need to attach a custom domain, get a handle on logging with AWS CloudWatch, and sort out some of the other aspects of API management using AWS API Gateway. However, it provides me with a nice look into the serverless world, and how I can use it to deploy and manage APIs, but also use APIs to manage a serverless approach by publishing functions using the Lambda API, keeping things in tune with my API definitions stored on Github.
I am learning about the AWS Marketplace through the lens of selling your API there, adding a new dimension to my API monetization and API plan research. I’ve invested a significant amount of energy to try and standardize what I learn from studying the pricing and plans for the API operations of the leading API providers. As I do this work I regularly hear from folks who like to tell me how I’ll never be able to standardize and normalize this, and that it is too big of a challenge to distill down. I agree that things seem too big to tame at the current moment, but with API pioneers like AWS, who have been doing this stuff for a while, you can begin to see the future of how this stuff will play itself out.
Amazon set into motion a significant portion of how we think about monetizing our API resources. The pay-for-what-you-use model was popularized by Amazon, and continues to dominate conversations around how we generate revenue from our valuable digital assets. AWS has some of the most sophisticated pricing structures around their API services, as well as very mature pricing calculators, and they have created markets around their resources (ie. spot instances for compute). You can see these concepts playing out in the guidance they offer software developers in their AWS Marketplace Seller Guide, which helps sellers modify their SaaS products to sell them through AWS Marketplace using two models: 1) metering, or 2) contract. When you list your application in the AWS Marketplace you must choose between one of these models, but both involve thinking critically about your monetization strategy, which includes your hard costs, as well as where the value will lie with your customers–striking the balance necessary to operate a viable API business.
According to the AWS Marketplace Seller Guide, each SaaS application owner listing through AWS Marketplace has two options for listing and billing their software (a minimal metering sketch follows the list below):
- You can use the AWS Marketplace Metering Service to bill customers for SaaS application use. AWS Marketplace Metering Service provides a consumption monetization model in which customers are charged only for the number of resources they use in your application. The consumption model is similar to that for most AWS services. Customers pay as they go.
- You can use the AWS Contract Service to bill customers in advance for the use of your software. AWS Marketplace Contract Service provides an entitlement monetization model in which customers pay in advance for a certain amount of usage of your software. For example, you might sell your customer a certain amount of storage per month for a year, or a certain amount of end-user licenses for some amount of time.
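To make the metering model a little more concrete, here is a rough sketch of reporting usage to the AWS Marketplace Metering Service from Node.js–the product code and dimension name are placeholders for the values AWS assigns when you list your product:

```javascript
// Sketch of reporting a consumer's hourly usage to the AWS
// Marketplace Metering Service. ProductCode and UsageDimension are
// placeholders for values assigned when you list your product.
const AWS = require('aws-sdk');
const metering = new AWS.MarketplaceMetering();

metering.meterUsage({
  ProductCode: 'myApiProductCode',
  Timestamp: new Date(),
  UsageDimension: 'requests',
  UsageQuantity: 1500 // number of API calls made this hour
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Metering record:', data.MeteringRecordId);
});
```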
No matter which plan you choose to deliver your API resources within, you will have to have the nuts and bolts of your API operations defined as part of your overall API monetization strategy. Each plan you offer needs to be derived from the hard costs involved with operations, but reflect the needs of your consumers. AWS gives you a handful of common dimensions for thinking through which type of plan you go with, and quantifying how you will be monetizing your API solution, in the following areas:
- Users – One AWS customer can represent an organization with many internal users. Your SaaS application can meter for the number of users signed in or provisioned at a given hour. This category is appropriate for software in which a customer’s users connect to the software directly (for example, with customer-relationship management or business intelligence reporting).
- Hosts – Any server, node, instance, endpoint, or other part of a computing system. This category is appropriate for software that monitors or scans many customer-owned instances (for example, with performance or security monitoring). Your application can meter for the number of hosts scanned or provisioned in a given hour.
- Data – Storage or information, measured in MB, GB, or TB. This category is appropriate for software that manages stored data or processes data in batches. Your application can meter for the amount of data processed in a given hour or how much data is stored in a given hour.
- Bandwidth – Your application can bill customers for an allocation of bandwidth that your application provides, measured in Mbps or Gbps. This category is appropriate for content distribution or network interfaces. Your application can meter for the amount of bandwidth provisioned for a given hour or the highest amount of bandwidth consumed in a given hour.
- Request – Your application can bill customers for the number of requests they make. This category is appropriate for query-based or API-based solutions. Your application can meter for the number of requests made in a given hour.
- Tiers – Your application can bill customers for a bundle of features or for providing a suite of dimensions below a certain threshold. This is sometimes referred to as a feature pack. For example, you can bundle multiple features into a single tier of service, such as up to 30 days of data retention, 100 GB of storage, and 50 users. Any usage below this threshold is assigned a lower price as the standard tier. Any usage above this threshold is charged a higher price as the professional tier. Tier is always represented as an amount of time within the tier. This category is appropriate for products with multiple dimensions or support components. Your application should meter for the current quantity of usage in the given tier. This could be a single metering record (1) for the currently selected tier or feature pack.
- Units – Whereas each of the above is designed to be specific, the dimension of Unit is intended to be generic to permit greater flexibility in how you price your software. For example, an IoT product which integrates with device sensors can interpret dimension “Units” as “sensors”. Your application can also use units to make multiple dimensions available in a single product. For example, you could price by data and by hosts using Units as your dimension. With dimensions, any software product priced through the use of the Metering Service must specify either a single dimension or define up to eight dimensions, each with their own price.
These dimensions reflect the majority of software services being sold out there today. Make sure you do not get stuck in a single way of thinking, like just charging per API call. Think about how your different API plans might have one or more dimensions, beyond any single use case.
- Single Dimension - This is the simplest pricing option. Customers pay a single price per resource unit per hour, regardless of size or volume (for example, $0.014 per user per hour, or $0.070 per host per hour).
- Multiple Dimensions – Use this pricing option for resources that vary by size or capacity. For example, for host monitoring, a different price could be set depending on the size of the host. Or, for user-based pricing, a different price could be set based on the type of user (admin, power user, and read-only user). Your service can be priced on up to eight dimensions. If you are using tier-based pricing, you should use one dimension for each tier.
All of these dimensions reflect the common building blocks of API plans and pricing which I’ve been tracking on for a number of years. It’s based upon Amazon selling their own APIs, as well as watching their customers price and sell their resources. Their pricing guide goes well beyond just APIs, and considers how you can generate revenue from any type of SaaS, but the dimensions they provide are a place to start for ALL API providers, whether you are looking to sell in the AWS Marketplace or not. You can find even more dimensions in my API plan research, but what Amazon provides will work for about 75% of the use cases out there today, and I’m looking to get you thinking critically about your API monetization and plans, not provide you with too many options.
There just aren’t too many examples like this available, when it comes to thinking through how to price your APIs. My friends over at Algorithmia have pushed the conversation forward some with their algorithmic API marketplace, but you just don’t see this level of maturity with the pricing of resources over at Azure, Google, or others yet. Amazon is the furthest along in this journey. They have the most experience, and the most data regarding what digital resources are worth, and how you can measure and meter access. I think it will take a number of years to mature, but I think by 2020 we will see more standardization in how we structure the pricing for the most common digital resources available online–even if it is just the APIs we are selling on Amazon.
There will always be an infinite number of ways to charge for your APIs, but for many of the digital commodities that have become staples, we’ll see one or two common approaches stick. We’ll see less innovation in how we price the most used APIs, because those with market share will dictate the model that gets adopted, and others will emulate just so they can get a piece of the pie. As other API resources continue to mature, becoming digital commodities, we’ll see their pricing structure stabilize, and standardize to fit into market frameworks like we see emerging on the AWS platform. It will take time, but we’ll begin to see machine readable templates governing API pricing and plans, allowing cross platform markets to flourish, as API consumers figure out they can make API usage more predictable, budget-able, and orchestrate-able. We aren’t there yet, but you can see hints of this API economy over at AWS within their marketplace approach.
I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. I’ve taken a look at the API consumer account basics as well as their usage, and next I want to consider the view of all of this from the API provider vantage point. For both of my current projects, I’m needing to think about the UI elements that deliver on API management elements from the API provider perspective.
To help me think through the UI elements needed for managing the essential elements of APIs, I wanted to create a simple list of each screen that will be needed to get the job done. So far, I have the following eight UI elements as part of my API management base:
- Creation - The landing page for account creation. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
- Login - The page for logging back in after a user has registered. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
- Dashboard - The landing page once an API provider is logged in – providing access to all aspects of API management.
- APIs - A list of APIs, with detail pages for managing each individual API definition.
- Plans - A list of API plans, with detail pages for managing each individual API plan definition.
- Accounts - A list of API consumer accounts, with detail pages for managing each individual API consumer.
- Usage - A list of API calls from logs, with tools for breaking down by API, plan, or consumer.
- Invoices - A list of all invoices that have been generated for a specific time period, across specific consumers. With a detail page for seeing individual invoice details.
There may be more of these added to my current projects. Things like forgot password, and other aspects, but this gives me the visibility into the API consumer account and usage aspects of the API management I’m trying to deliver. These are the basic features any API management provider delivers, and represent the default areas of control an API provider should have over API consumption. It doesn’t have to be pretty, or have all the bells and whistles, but it should give enough to help API providers understand what is going on in real-time, and over a variety of time periods.
The clients I refer to 3Scale, Tyk, Restlet, and the other API management services that are available won’t have to reinvent the wheel here. It is for my clients who are using AWS API Gateway, and developing seamless custom integrations, that I’m having to specify things at this level. My advice is to make use of existing providers whenever you can, but for this round of projects I’m putting AWS API Gateway to work, so I’m going to need to do a little more custom work on the API management front. When I come out of this I should have a suite of API definitions that can be imported into the AWS API Gateway, as well as some Jekyll page templates for delivering the UI functionality listed above–augmenting the AWS API Gateway with essential API management features that should be present within any API operations.
I am getting intimate with AWS API Gateway, learning about what it does, and what it doesn’t do. The gateway brings a number of essential API management elements to the table, like issuing keys, establishing plans, and enforcing rate limits. However, it also lacks many of the other active elements of API management, like billing for usage, which is an important aspect of managing API consumption for API providers. With AWS, things tend to work like legos, meaning many of their services work together to deliver a larger picture, and I’ve been learning more about how AWS API Gateway works with the AWS Marketplace to deliver some of the business of APIs features I’m looking for.
Here is the blurb from the AWS API Gateway documentation regarding how you can setup AWS API Gateway to work with AWS Marketplace, making your API available for sale as a SaaS service:
After you build, test, and deploy your API, you can package it in an API Gateway usage plan and sell the plan as a Software as a Service (SaaS) product through AWS Marketplace. API buyers subscribing to your product offering are billed by AWS Marketplace based on the number of requests made to the usage plan.
To sell your API on AWS Marketplace, you must set up the sales channel to integrate AWS Marketplace with API Gateway. Generally speaking, this involves listing your product on AWS Marketplace, setting up an IAM role with appropriate policies to allow API Gateway to send usage metrics to AWS Marketplace, associating an AWS Marketplace product with an API Gateway usage plan, and associating an AWS Marketplace buyer with an API Gateway API key. Details are discussed in the following sections.
To enable your customers to buy your product on AWS Marketplace, you must register your developer portal (an external application) with AWS Marketplace. The developer portal must handle the subscription requests that are redirected from the AWS Marketplace console.
While it is not exactly the billing model I’m looking for in my API management layer currently, it provides a compelling look at selling your APIs in a marketplace setting. There are a growing number of APIs available in the AWS Marketplace, but still less than a hundred from what I can tell. Smells like an opportunity for API providers to step up and publish their APIs. The linkage between an AWS API Gateway usage plan and an AWS Marketplace product makes a lot of sense when you think about having an APIs-as-a-product focus for your organization. I will be writing more about this relationship in the future, as I think it reflects how we should be thinking about our API service composition, and how we craft our API usage plans.
I’m going to set up one of my simpler, more utility style APIs, like screen capture, or proper noun search, deploy it using AWS API Gateway, then publish it as an AWS Marketplace product. I want to have a clear view of what it takes, and what the on-boarding process looks like for a consumer. Another thing I’m interested in is how I can also offer a more wholesale version of these APIs. Meaning, I’m not metering their usage with AWS API Gateway and billing via AWS Marketplace. It will be a separate product that gets deployed in their AWS infrastructure as EC2, RDS, and maybe AWS API Gateway, where they aren’t billed by usage, but by the implementation. This is a model I’ve been watching, considering, and thinking about for some time, but only now is the public cloud infrastructure able to support this type of API deployment and management. It will be interesting to see what I can make work.
I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. I’ve taken a look at the API consumer account basics as well as their usage, and next I want to consider the view of all of this from the API provider vantage point. For both of my current projects, I’m needing to think about the UI elements that deliver on API management elements from both the API provider and consumer levels. I’ve already tackled the API provider view, next up is the API consumer view.
To help me think through the UI elements needed for managing the essential elements of API consumption for developers, I wanted to create a simple list of each screen that will be needed to get the job done. So far, I have the following nine UI elements as part of my API management base:
- Creation - The landing page for developer account creation. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
- Login - The page for logging back in after a developer has registered. Ideally, these are using OpenID / OAuth for major providers, eliminating the need for passwords.
- Dashboard - The landing page once an API consumer is logged in – providing access to all aspects of their API access.
- Account - The ability to update developer account information.
- Key(s) - The ability to get the master set, or multiple copies of API keys.
- Plans - Viewing all available plans, with the ability to see which plan a consumer is part of and switch plans if relevant.
- Usage - See history of all API consumption by API and time period.
- Credit Card - The addition and updating of account credit card.
- Billing - See history of all invoices for API consumption.
There may be more of these added to my current projects. Things like forgot password, and other aspects, but this gives me the visibility into the API consumer account and usage aspects of the API management I’m trying to deliver. These are the basic features any API management provider delivers, and represent the default areas of control an API provider should have over API consumption. It doesn’t have to be pretty, or have all the bells and whistles, but it should give each API consumer what they need to manage their access, consumption, and understand what they’ve consumed and been billed for.
The clients I refer to 3Scale, Tyk, Restlet, and the other API management services that are available won’t have to reinvent the wheel here. It is for my clients who are using AWS API Gateway, and developing seamless custom integrations, that I’m having to specify things at this level. My advice is to make use of existing providers whenever you can, but for this round of projects I’m putting AWS API Gateway to work, so I’m going to need to do a little more custom work on the API management front. When I come out of this I should have a suite of API definitions that can be imported into the AWS API Gateway, as well as some Jekyll page templates for delivering the UI functionality listed above–augmenting the AWS API Gateway with essential API management features that should be present within any coherent API operations.
I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. I spent some time thinking through the developer account basics, and now I want to break down the aspects of API consumption and usage around these APIs and developer accounts. I want to think about the moving parts of how we measure, quantify, communicate, and invoice as part of the API management process.
Having A Plan We have developers signed up, with API keys that they’ll be required to pass in with each API call they make. The next portion of API management I want to map out for my clients is the understanding and management of how API consumers are using resources. One important concept that I find many API providers, and would-be API providers, don’t fully grasp is service composition. It requires the presence of a variety of access tiers, or API plans, which define the business contract we are making with each API consumer. API plans usually have these basic elements (a simple example follows the list):
- plan id - A unique id for the plan.
- plan name - A name for the plan.
- plan description - A description for the plan.
- plan limits - Details about limits of the plan.
- plan timeframe - The timeframe for limits applied.
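As a simple example of how these elements might be stored, here is a hypothetical plan definition as a plain object–the values are just placeholders:

```javascript
// A hypothetical API plan definition, using the basic elements above.
const plan = {
  planId: 'starter',
  planName: 'Starter',
  planDescription: 'Free access for new consumers kicking the tires.',
  planLimits: {
    requests: 1000 // maximum API calls allowed within the timeframe
  },
  planTimeframe: 'day' // the window the limits apply to
};
```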
There can be more than one plan, and each plan can provide different types of access to different APIs. There might be plans for new users versus verified ones, as well as possibly partners. The why and how of each plan will vary from provider to provider, but they are all essential to managing API usage. This needs to be well defined and in place, with APIs and consumers organized into their respective tiers. Once this is done, we can begin thinking about the next layer, logging.
Everything Is Logged Each call made to the API contains an API key which identifies the consumer, who should always be operating within a specific plan. Each API call is logged at the web server, and ideally the API management layer, providing timestamped entries for every drop of API consumed. These logs are used across API management operations, providing a history of all usage that will be evaluated on a per API, as well as per API consumer level. If a request and response isn’t available in the API logs, then it didn’t happen.
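To show what this layer captures, here is a minimal sketch of a logging middleware for a Node.js Express API, recording a timestamped entry with the consumer’s key for every call–the field names are my own:

```javascript
// Minimal sketch of an API management logging middleware, capturing
// who called what, and when, for every request.
const express = require('express');
const app = express();

const apiLog = []; // stand-in for a real log store

app.use((req, res, next) => {
  apiLog.push({
    timestamp: new Date().toISOString(),
    apiKey: req.get('x-api-key') || null, // identifies the consumer
    method: req.method,
    path: req.path
  });
  next();
});

// ... register API routes and app.listen() below ...
```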
Quantifying API Usage Every log entry recorded will have the keys for a specific API consumer, as well as the path of the API they are consuming. When this usage is viewed through the lens of the API plan an API consumer is operating within, you have the ability to quantify usage, and place a value on overall consumption by any time frame. This is the primary objective of API management, quantifying who is accessing API resources, and how much they are consuming. This is how valuable API resources are being secured, and in many situations monetized, using simple web APIs.
API usage begins with quantifying what consumers have used, but then there are other dimensions that should be considered as well. For example, usage across API users for a single path, or group of paths. Also possibly usage across APIs and consumers within a particular plan. Then you can begin to look across time periods, regions, and other relevant information, providing a more complete picture of how APIs are being put to use. This awareness is why API providers are employing API management techniques, beyond what can be extracted from database logs, or just website or application analytics–providing a more wholesale view of how APIs are consumed.
Invoicing For Consumption Now that we have all API usage defined, and available through the lens of API plans, we have the option of invoicing for consumption. We know each call API consumers have made, and we have a rate specified for each API as part of the API plans each consumer is subscribed to. All we have to do is the math, and generate an invoice for the designated time period (ie. daily, weekly, monthly, quarterly). Invoicing doesn’t always have to be settled with cash, as consumption may be internal, with partners, or with a variety of public consumers. I’d also warn against thinking of consumption as always costing the consumer, as sometimes it might be relevant to pay some API consumers for their usage–incentivizing a particular type of behavior that benefits a platform.
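Sketching out that math, an invoice for a time period is just the logged calls filtered by consumer and period, priced using their plan–something like this hypothetical function, which assumes a per-call rate on the plan and log entries shaped like the logging sketch above:

```javascript
// Hypothetical invoice calculation: count a consumer's logged calls
// within a billing period, and price them using their plan's rate.
// ISO timestamp strings compare correctly as plain strings.
function generateInvoice(logEntries, apiKey, plan, periodStart, periodEnd) {
  const calls = logEntries.filter((entry) =>
    entry.apiKey === apiKey &&
    entry.timestamp >= periodStart &&
    entry.timestamp <= periodEnd);
  return {
    apiKey: apiKey,
    planId: plan.planId,
    totalCalls: calls.length,
    amountDue: calls.length * plan.pricePerCall // e.g. $0.001 per call
  };
}
```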
Measuring, quantifying, and accounting for API consumption is the primary reason companies, organizations, institutions, and government agencies are implementing API management layers. It is how Amazon generates revenue from their Amazon Web Services. It is how Stripe, Twilio, Sendgrid, and other rockstar API providers generate the revenue behind their operations. It is how government agencies are understanding how the public is putting valuable public resources to work. API management is something that is baked into cloud platforms, with a variety of service providers available as well, providing a wealth of solutions for API providers to choose from.
Next I will be looking at the account and usage layers of API management from the view of the API provider, as well as the API consumer via their developer area. Ideally, API management solutions provide the dashboards needed for both sides of this equation, but in some of the projects I’m working on this isn’t available. There is no ready to go dashboard for API providers or consumers to look at when standing up an AWS API Gateway in front of an API, falling short when it comes to going the distance as an API management solution. You can define accounts, issue keys, establish plans, and limit API consumption, but we need AWS CloudWatch, and other services, to deliver on API logging, authentication, and other common aspects of management–with API management dashboards being a custom thing when employing AWS for management. This is one consideration for why you might go with a more robust API management solution, beyond what an API gateway offers.
I’m helping some clients think through their approach to API management. These projects have different needs, as well as different resources available to them, so I’m looking to distill things down to the essential components needed to get the job done. The first element you need to manage API access is the ability for API consumers to sign up for an account, which will be used to identify each consumer, measure their usage, and engage with them.
Starts With An Account While each company may have more details associated with each account, each account will have these basics:
- account id - A unique identifier for each API account.
- name - A first name and last name, or organization name.
- email - A valid email address to communicate with each user.
Depending on how we enable account creation and login, there might also be a password. However, if we use existing OpenID or OAuth implementations, like Github, Twitter, Google, or Facebook, this won’t be needed. We are relying on these authentication formats as the security layer, eliminating the need for yet another password. However, we still may need to store some sort of token identifying the user, adding these two possible elements:
- password - A password or phrase that is unique to each user.
- token - An OAuth or other token issued by 3rd party provider.
That provides us with the basics of each API developer account. It really isn’t anything different than a regular account for any online service. Where things start to shift a little, specifically for APIs, is that we need some sort of keys for each account that is signing up for API access. The standard approach is to provide some sort of API key, and possibly a secondary secret to complement it:
- api key - A token that can be passed with each API call.
- api secret - A second token, that can be passed with each API call.
Many API providers just automatically issue these API keys when each account is created, allowing consumers to reset and regenerate them at any point. These keys are more about identification than they are about security, as each key is passed along with each API call, providing identification via API logging, which I’ll cover in a separate post. The only security keys deliver is that API calls are rejected if keys aren’t present.
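Issuing these keys can be as simple as generating random tokens when the account is created–a minimal sketch using Node.js’s crypto module:

```javascript
// Sketch of issuing an API key and secret when an account is created.
const crypto = require('crypto');

function issueKeys(account) {
  account.apiKey = crypto.randomBytes(16).toString('hex');
  account.apiSecret = crypto.randomBytes(32).toString('hex');
  return account;
}
```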
Allow For Multiple Applications Other API providers will go beyond this base level of API account functionality, allowing API consumers to possibly generate multiple sets of keys, sometimes associated with one or many applications. This opens up the question of whether these keys should be associated with the account, or with one or many registered applications:
- app id - A unique id for the application.
- app name - A name for the application.
- app description - A description of the application.
- api key - A token that can be passed with each API call.
- api secret - A second token, that can be passed with each API call.
Allowing for multiple applications isn’t something every API provider will need, but it does reduce the need for API consumers to sign up for multiple accounts when they need an additional API key for a separate application down the road. This is something API providers should be considering early on, reducing the need to make changes later. If it isn’t needed, there is no reason to introduce the complexity into the API management process.
This post doesn’t even touch on logging and usage. It is purely about establishing developer accounts, and providing what they’ll need to authenticate and identify themselves when consuming API resources. Next I’ll explore the usage, consumption, and billing side of the equation. I’m trying to keep things decoupled as much as I possibly can, as not every situation will need every element of API management. It helps me articulate all the moving parts of API management for my readers and customers, and allows me to help them make sensible decisions that they can afford.
Ideally, developer accounts are not a separate thing from any other website, web, or mobile application accounts. If I had my way, API developer accounts would be baked into the account management tools for all applications by default. However, I do think things should be distilled down, standardized, and kept simple and modular for API providers to consider carefully, and think about deeply, when pulling together their API strategy, separate from the rest of their operations. I’m taking the thoughts from this post and applying them to one project I’m deploying on AWS, with another that will be delivered by custom deploying a solution that leverages Github, and basic Apache web server logging. Keeping the approach standardized, but something I can do with a variety of different services, tools, and platforms.
I’m an old database person. I’ve been working with databases since my first job in 1987. Cobol. FoxPro. SQL Server. MySQL. I have had a production database in my charge accessible via the web since 1998. I understand how databases are the center of gravity when it comes to data. Something that hasn’t changed in an API driven world. This is something that will make microservices in a containerized landscape much harder than some developers will want to admit. The tractor beam of the database will not give up control to data so easily, either because of technical limitations, business constraints, or political gravity.
Databases are all about the storage of and access to data. APIs are about access to data. Storage, and the control that surrounds it, is what creates the tractor beam. Most of the reasons for control over the storage of data are not looking to do harm. Security. Privacy. Value. Quality. Availability. There are many reasons stewards of data want to control who can access data, and what they can do with it. However, once control over data is established, I find it often morphs and evolves in ways that can eventually become harmful to meaningful and beneficial access to data–which is usually the goal behind doing APIs, but is often seen as a threat to the mission of data stewards, resulting in a tractor beam that API related projects will find themselves caught up in, and find difficult to ever break free of.
The most obvious representation of this tractor beam is that all data retrieved via an API usually comes from a central database. Also, all data generated or posted via an API ends up within a database. The central database always has an appetite for more data, whether scaled horizontally or vertically. Next, it is always difficult to break off subsets of data into separate API-driven projects, or prevent newly established ones from being pulled in, and made part of existing database operations. Whether due to technical, business, or political reasons, many projects born outside this tractor beam will eventually be pulled into the orbit of legacy data operations. Keeping projects decoupled will always be difficult when your central database has so much pull over how data is stored and accessed. This isn’t just a technical decoupling, this is a cultural one, and it will be much more difficult to break from.
Honestly, if your database is over 2-3 years old, and enjoys any amount of complexity, budget scope, and dependency across your organization, I doubt you’ll ever be able to decouple it. I see folks creating these new data lakes, which act as reservoirs for any and all types of data gathered and generated across operations. These lakes provide valuable opportunities for API innovators to potentially develop new and interesting ways of putting data to work, if they possess an API layer. However, I still think the massive data warehouse and database will look to consume and integrate anything structured and meaningful that evolves on their shores. Industrial grade data operations will just industrialize any smaller utilities that emerge along the fringes of large organizations. Power structures have long developed around central data stores, and no amount of decoupling, decentralizing, or blockchaining will change this any time soon. You can see this with the cloud, which was meant to disrupt this, when it just moved things from your data center to someone else’s, and allowed it all to grow at a faster rate.
I feel like us API folks have been granted ODBC and JDBC leases for our API plantations, but rarely will we ever decouple ourselves from the mother ship. No matter what the technology whispers in our ears about what is possible, the business value of, and political control over, established databases will always dictate what is possible and what is not. I feel like this is one reason all the big database platforms have waited so long to provide native API features, and why next generation data streaming solutions rarely have simple, intuitive API layers. I think the tractor beam of database culture will continue to be aggressive, as well as passive aggressive, toward anything API, trumping the access possibilities brought to the table by APIs with outdated power and control beliefs rooted in how we store and control our data. These folks rarely understand they can be just as controlling and greedy with APIs, and they seem unable to get over the promises of access APIs afford, refusing to play along at all when it comes to turning down the volume on the tractor beam so anything can flourish.
I am adding another building block to my webhooks research out of Github. As I continue this work, it is clear that Github will continue to play a significant role in my webhook research and storytelling, because they seem to be the most advanced when it comes to orchestration via APIs and webhooks. I’m guessing this is a by-product of continuous integration (CI) and continuous deployment (CD), which Github is at the heart of. The API platforms that have embraced automation and orchestration as part of what they do always have the most advanced webhook implementations, and provide the best examples of webhooks in action, which we can all consider as part of our operations.
Today’s webhook building block is the ping event. “When you create a new webhook, we’ll send you a simple ping event to let you know you’ve set up the webhook correctly. This event isn’t stored so it isn’t retrievable via the Events API. You can trigger a ping again by calling the ping endpoint.” A pretty simple, but handy feature when it comes to getting up and going with webhooks, making sure everything is working properly out of the gate–something that clearly comes from experience, and listening to the problems your consumers are encountering.
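Handling the ping on the receiving end is simple. Here is a sketch of an Express webhook endpoint that acknowledges Github’s ping event before handling anything else–the route path is my own:

```javascript
// Sketch of a webhook receiver that acknowledges Github's ping event.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/github', (req, res) => {
  // Github identifies the event type in the X-GitHub-Event header.
  if (req.get('X-GitHub-Event') === 'ping') {
    return res.status(200).send('pong'); // webhook is wired up correctly
  }
  // ... handle push, issues, and other events here ...
  res.status(200).end();
});
```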
These types of subtle webhook features are exactly the types of building blocks I’m looking to aggregate as part of my research. As I do with other areas of my research, at some point I will publish all of these into a single, (hopefully) coherent guide to webhooks. After going through the webhook implementations across leading providers like Github, I should have a wealth of common patterns in use. Since webhooks aren’t any formal standard, they are yet another aspect of doing business with APIs we have to learn from the healthy practices already in use across the space. It helps to emulate providers like Github, because developers are pretty familiar with how Github works, and when your webhooks behave in similar ways it reduces the cognitive load API consumers face when they are getting started.
One other thing to note in this story–my link to Github’s documentation goes directly to the section on webhook ping events, because they use anchors for all titles and subtitles. This is something that makes storytelling around API operations soooooooooo much easier, and more precise. Please, please, please emulate this in your API operations. If I can directly link to something interesting within your API documentation, the chances are much greater I will tell a story, and publish a blog post about it. If I have to make a user search for whatever I’m talking about, I’m probably just gonna pass on it. One more trick for your toolbox, when it comes to getting me to tell more stories about what you are up to.
As my friend John Sheehan over at Runscope says, “the spreadsheet is the most underrated API client”. The spreadsheet is where a significant amount of business gets done each day, so it makes sense that we should be integrating APIs at this level whenever we possibly can. The best tool for doing this today is Blockspring, which provides non-developers (and developers) with the tools they need to integrate APIs into spreadsheets, either Microsoft Excel or Google Sheets–putting the power of APIs directly into the hands of average business folk.
Blockspring has over 100 APIs available for integration into your spreadsheets, but I wanted to highlight their recent release of bulk connectors, which currently provides 10 separate data enrichment features from a handful of API providers:
- Bing - Company Domain Lookups
- Bing - Search Query Lookups
- FullContact - Email Lookups
- FullContact - Company Lookups
- FullContact - Twitter Lookups
- FullContact - Phone Lookups
- Mailgun - Validate Emails
- Clarifai - Deep Learning Image Tagging
- Google - Shorten URLs
- Google - Expand URLs
These bulk connectors are meant to help you work with bulk data you have stored in spreadsheets and CSV files by enriching your data using these valuable API services. These connectors are all features I’ve custom developed as part of my internal systems, to help me monitor the world of APIs. They are simple, useful data management features, but instead of having to custom integrate with each API as I had to, anyone can use Blockspring to deliver these features within their own spreadsheets. Making the spreadsheet act as an API client, but for any average business user, not just for developers who are API savvy, and have the skills to deliver custom integration.
I’m a big advocate of companies publishing APIs. I also do a lot of pushing for API providers to make sure their APIs are available to non-developers using Zapier. I consider Blockspring to be on this list of essential API services you should be working with as an API provider. This approach to consuming APIs will increasingly be how business gets done. As much as many of us developers would love for spreadsheets to go away, they ain’t going anywhere. Most normal folk in the world of business live and breathe within their spreadsheets, and if we want to deliver our API services to them, it will have to be through brokers like Blockspring. Just pause for a moment and think about the potential for delivering machine learning models via APIs within the spreadsheet like this, and you’ll begin to realize what an underserved aspect of API consumption spreadsheets are.
I’ve been learning more about AWS API Gateway, and wanted to share some of what I’m learning with my readers. The AWS API Gateway is a robust way to deploy and manage an API on the AWS platform. The concept of an API gateway has been around for years, but the AWS approach reflects the commoditization of API deployment and management, making it a worthwhile cloud API service to understand in more depth. With the acquisition of all the original API management providers in recent years, as well as Apigee’s IPO, API management is now a default element of major cloud providers. Since AWS is the leading cloud provider, AWS API Gateway will play a significant role in the deployment and management of a growing number of the APIs we see.
Using AWS API Gateway you can deploy a new API, or you can use it to manage an existing API–demonstrating the power of a gateway. What really makes AWS API Gateway reflect where things are going in the space is the ability to import and define your API using OpenAPI. When you first begin with the new API wizard, you can upload or copy / paste your OpenAPI, defining the surface area of the API, no matter how you are wiring up the backend. OpenAPI is primarily associated with publishing API documentation because of the success of Swagger UI, and secondarily associated with generating SDKs and code samples. However, increasingly the OpenAPI specification is also being used to deploy APIs and define aspects of API management, which is in alignment with the AWS API Gateway approach.
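If you would rather script that import than use the console wizard, the AWS SDK exposes it directly–a quick sketch, assuming your OpenAPI definition lives in a local swagger.json file:

```javascript
// Sketch of importing an OpenAPI definition into AWS API Gateway.
const AWS = require('aws-sdk');
const fs = require('fs');
const apigateway = new AWS.APIGateway();

apigateway.importRestApi({
  body: fs.readFileSync('swagger.json'), // your OpenAPI definition
  failOnWarnings: false
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Created API with id:', data.id);
});
```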
I have server side code that will take an OpenAPI and generate the server side code needed to work with the database, and handle requests and responses using the Slim API framework. I’ll keep doing this for many of my APIs, but for some of them I’m going to be adopting an AWS API Gateway approach to help standardize API deployment and management across the APIs I deliver. I have multiple clients right now for whom I am deploying APIs, and helping manage their API operations using AWS, so adopting AWS API Gateway makes sense. One client is already operating on AWS, which dictated that I keep things there, while the other client is brand new and looking for a host, which also makes AWS a good option for setting up, and then passing over control of their account and operations to their on-site manager.
Both of my current projects are using OpenAPI as the central definition for all stops along the API lifecycle. One API is new, and the other is about proxying an existing API. Importing the central OpenAPI definition for each project into AWS API Gateway worked well to get both projects going. Next, I am exploring the staging features of AWS API Gateway, and the ability to overwrite or merge the next iteration of the OpenAPI with an existing API, allowing me to evolve each API forward in coming months as the central definition gets added to. Historically, I haven’t been a fan of API gateways, but I have to admit that the AWS API Gateway is somewhat changing my tune. Keeping the service doing one thing, and doing it well, with OpenAPI as the definition, really fits with my sense of where API management is headed as the API sector matures.
I believe in the potential of what APIs can do, and care about learning how we can do things right. Part of it is my job, but part of it is me wanting to do things well–mastering my approach to delivering APIs using my well-rounded API toolbox, reading about the approaches of other leading API providers, and honing my understanding of healthy, and not so healthy, practices. I thoroughly enjoy studying what is going on and then applying it across what I do. However, I am reminded regularly that most people are not interested in knowing, and doing things right–they just want things done.
As many of us discuss the finer details of API design, and the benefits of one approach over the other, other folks would rather us just point them to the solution that will work for them. They really don’t care about the details, or mastering the approach, they just want it to work in their situation. Whether it is the individual, the project or organization they are working in, the environment is just not conducive to learning, understanding, and growth. They are just interested in services and tools that can deliver the desired solution for as cheap as possible–free, and open source whenever available.
You can see this reality playing out across the space. An example is OpenAPI (fka Swagger). It has largely been successful because of Swagger UI. Most people think OpenAPI is all about documentation, with their understanding reflecting the solution they delivered, not the full benefits brought to the table as part of the process of implementing the specification. This is just one example of how folks across the API space are interested in solutions, rather than the journey. This is why many API programs will stagnate and fail, because folks do them thinking they’ll achieve some easier way of doing things, easy integrations, effortless innovation, or some other myth around API Valhalla.
I feel like much of this reality is set into motion through our education system. We don’t always teach people to learn, we tend to teach them the job skills that we (companies) perceive are relevant today. Technology training tends to be about software solutions, brands, and platforms, and rarely concepts. Then I feel like this is baked into company operations because of the escalated pace introduced by startup funding, all the way up to the constant quest for profits at publicly traded companies. An incentive model that encourages folks to just do, not necessarily do right. Achieve short term goals, quarterly and annual metrics, and ignore the technical debt we created–that will be someone else’s problem.
Helping developers learn about APIs feels like our healthcare system sometimes to me. Developers are focused on fixing problems, treating symptoms, and living with good enough outcomes. Nothing preventative. No healthy diet or exercise. Just solving the problems that are in front of us. Looking for solutions to these immediate needs. I need API documentation. I need some sort of automated testing, or security scanner. I just need that API to be performant. I’m not interested in learning about the web, standards, or other things that would help me in my work. I just want this to work, so I can move on to the next problem. It’s a pretty big problem in the API space which I really don’t have any solution for. I’ll just keep learning, educating, and sharing with folks, but regularly reminding myself that not everyone will care about this stuff as much as I do, and that is just the way things are.
API design is something that many have tried to quantify and measure, but in my experience very few ever establish any meaningful way of doing so properly. I’ve been learning about the approach to API governance from the Capital One DevExchange team, and found their approach to defining API design maturity pretty interesting. I’m mostly interested in their approach because it speaks to actual business objectives, and isn’t about the common technical aspects we see API design quantified by across the community each day.
Capital One breaks things down into five distinctive layers that offer value to any organization doing APIs. Starting at the bottom of their maturity model, here are the levels of maturity they are measuring things by:
- Functional - Doing the basics, providing some low-level functionality, and nothing more.
- Reliable - An API that is reliable, and scalable, beyond just basic functionality.
- Intuitive - Where an investment in developer experience is made, further standardizing and streamlining what an API does.
- Empowering - Where an API really begins to deliver value to an organization by being functional, reliable, and intuitive, which all contributes significantly to operations.
- Transformative - APIs that are game changers. Few APIs ever rise to this level, but all should aspire to this level of maturity.
It provides a whole other lens to look at API design through, moving beyond just asking whether it is RESTful, hypermedia, GraphQL, gRPC, or another emerging approach. It also acknowledges that not all APIs will be equal, and that we should be standardizing the value the design of our APIs delivers to our business operations. Forcing us to ask some pretty simple questions about the API design patterns we are using, and the actual value they bring to the table. Moving beyond API design being about technical details, and considering the business, and other political aspects of doing APIs.
The Capital One API design maturity definition also demonstrates another important aspect of all of this: that it takes work, time, and experience to properly design an API that will rise to an empowering or transformative level. It can be easy to deliver functional APIs, but making them reliable, and intuitively getting the job done, will take investment. I’m adding this definition to my API design research, as well as my upcoming API governance research. I’m looking to evolve my definition of what API design looks like beyond just REST, and I’m wanting to move API governance beyond some IT led concept, to something that is in sync with business objectives, like I’m seeing from the Capital One team. It is easy to think of API design as mature once it enters production, but I’m thinking there is a lot more we should be considering before we consider our approach to API design mature.
APIs are not forever, and eventually will go away. The trick with API deprecation is to communicate clearly, and regularly with API consumers, making sure they are prepared for the future. I’ve been tracking on the healthy, and not so healthy practices when it comes to API deprecation for some time now, but felt like Google had some more examples I wanted to add to our toolbox. Their approach to setting expectations around API deprecation is worthy of emulating, and making common practice across industries.
The Google Adwords API team is changing their release schedule, which in turn impacts the number of APIs they’ll support, and how quickly they will be deprecating their APIs. They will be releasing new versions of the API three times a year, in February, June, and September. They will also be supporting only two releases concurrently at all times, and three releases for a brief period of four weeks, pushing the pace of API deprecation alongside each release. I think that Google’s approach provides a nice blueprint that other API providers might consider adopting.
Adopting an API release and sunset schedule helps communicate the changes on the horizon, but it also provides a regular rhythm that API consumers can learn to depend on. You just know that there will be three releases a year, and you have a quantified amount of time to invest in evolving integration before any API is deprecated. It’s not just the communication around the roadmap, it is about establishing the schedule, and establishing an API release and sunset cadence that API consumers can be in sync with. Something that can go a lot further than just publishing a road map, and tweeting things out.
I’ll add this example to my API deprecation research. Unfortunately the topic is one that is rarely communicated well in the API space, but Google has long been a strong player when it comes to providing healthy API deprecation examples to follow. I’m hoping to get to the point soon where I can publish a simple guide to API deprecation. Something API providers can follow when they are defining and deploying their APIs, establishing a regular API release and deprecation approach that API developers can depend on. It can be easy to get excited about launching a new API, and forget all about its release and deprecation cycles, so a little guidance goes a long way toward helping API providers think about the bigger picture.
Operating on Github is natural for me, but I am regularly reminded what a foreign concept it is for some of the API providers I’m speaking with. Github is the cheapest, easiest way to launch a public or private developer portal for your API. With the introduction of Github Pages, each Github repository can be turned into a place to host any API related project. In my opinion, every API should begin with Github, providing a place to put your API definition, portal, and other elements of your API operations.
If you are just getting going with understanding how Github can be used to support your API operations, I wanted to provide a simple checklist of the concepts at play, which will lead to you being able to publish your API portal to Github.
- Github Account - You will need an account to be able to use Github. Anything you do on Github that is public will be free. You can do private portals on Github, but this story is about using it for a public API portal.
- Github Organization - I recommend starting an organization for your API operations, instead of working under a single user’s account. Then you can make the definition for the API the first repository you create, and possibly the portal your second.
- Github Repo - A Github repository is basically a folder on the platform where you can store the code, pages, and other content used as part of API operations.
- Github Pages - Each Github repository has the ability to turn on a public project site, which can be used as a hosting location for a developer portal.
- Jekyll - The static website engine behind Github Pages, which allows any Github repository to become a website hosting location that you can access via your Github account, or even serve using your own domain.
I recommend every API provider think about hosting their API portal on Github. The learning curve isn’t that significant to get up and running, and if your portal is public, it is free. You can version control, and leverage other key aspects of Github for evolving and managing your API portal. There are a growing number of examples of forkable API portals, like the one from the GSA, or an interesting minimum viable API documentation template from my friend James Higginbotham (@launchany)–demonstrating that the practice is growing, with the number of healthy examples to build on diversifying.
If you need help understanding how to use Github for hosting your API developer portal, feel free to reach out. I am happy to see where I can help. Another thing to note is that this approach to running a Jekyll static website isn’t limited to Github. You can always start the project there, and move off to any Jekyll enabled hosting provider. I run my entire network of websites and API projects this way, leveraging Github as my plan A, and AWS as my plan B, with a server image ready to go when I need it. Github just provides a number of bells and whistles that make it much more usable, as well as something others can collaborate around, enjoying the network effects that come with using the platform.
I am developing a basic API management strategy for one of my client’s APIs. With each area of their API strategy I am taking what I’ve learned monitoring the API sector, pausing for a moment to think about it again, and then applying it to their operations. Over the years I have separated out many aspects of API management, distilling it down to a core set of elements that reflect how API management has evolved into a digital commodity. It helps me to think through these aspects of API operations in general, but also apply them to a specific API I am working on, helping me further refine my API strategy advice.
API management is the oldest area of my research. It has spawned every other area of the lifecycle I track on, but is also the most mature aspect of the API economy. This project I am working on gives me an opportunity to think about what API management is, and what should be spun off into separate areas of concern. I am looking to distill API management down to the following elements, with a minimal sketch of the core after the list:
- Portal - A single URL to find out everything about an API, and get up and running working with the resources that are available.
- On-Boarding - Think about how you get a new developer from landing on the home page of the portal, to making their first API call, and then on to an application in production.
- Accounts - Allowing API consumers to sign up for an account, either for individual, or business access to API resources.
- Applications - Enable each account holder to register one or many applications which will be putting API resources to use.
- Authentication - Providing one, or multiple ways for API consumers to authenticate and get access to API resources.
- Services - Defining which services are available across one or many API paths providing HTTP access to a variety of business services.
- Logging - Every call to the API is logged via the API management layer, as well as at the DNS, web server, file system, and database levels.
- Analysis - Understanding how APIs are being consumed, and how applications are putting API resources to use, identifying patterns across all API consumption.
- Usage - Quantifying usage across all accounts, and their applications, then reporting, billing, and reconciling usage with all API consumers.
- APIs - API access to accounts, authentication, services, logging, analysis, and usage of API resources.
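To show just how thin this core really is, here is a minimal sketch of the accounts, applications, authentication, logging, and services elements, expressed as a single piece of middleware. Everything here, from names to storage, is hypothetical, just to demonstrate the moving parts:

```python
# minimal_api_management.py -- a hypothetical sketch of the core elements of
# API management: accounts, applications, authentication, logging, and usage.
import time
from functools import wraps

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Accounts and applications would live in a real datastore; dicts stand in here.
ACCOUNTS = {"acct-1": {"name": "Example Consumer"}}
APPLICATIONS = {"key-abc123": {"account": "acct-1", "app": "demo-app"}}
USAGE_LOG = []  # every call gets logged, feeding analysis, usage, and billing


def managed(endpoint):
    """Authenticate the API key, then log the call against the application."""
    @wraps(endpoint)
    def wrapper(*args, **kwargs):
        key = request.headers.get("X-Api-Key")
        if key not in APPLICATIONS:
            abort(401)  # authentication: no registered application, no access
        USAGE_LOG.append({"key": key, "path": request.path, "at": time.time()})
        return endpoint(*args, **kwargs)
    return wrapper


@app.route("/services/example")
@managed
def example_service():
    # services: the actual business resource being made available over HTTP
    return jsonify({"message": "hello from a managed API"})


if __name__ == "__main__":
    app.run(port=8080)
```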
There are other common elements bundled with API management, but this reflects the core of what API management is about–the business of APIs. Keeping track of who has access to what, and how much they are using. There are a number of other aspects that many will consider under the API management umbrella, but that I’ve elevated to being their own stops along the API lifecycle. Some areas are:
- Documentation - Static or interactive documentation for all available API paths, parameters, headers, and other details of the request and response surface area of the API.
- Support - Self-service, or direct support channels that API consumers put to use to get help along the way.
- SDKs - The SDKs, samples, libraries, and other supporting code elements for web, mobile, or other types of applications.
- Road Map - Communicating what the future holds when it comes to the API.
- Issues - Notification of any open issues with the availability of the API.
- Change Log - A history of what has happened when it comes to changes to the API.
These areas complement API management, but should be approached beyond the day to day management aspects of API operations. I’d also consider authentication, logging, and analysis to be bigger than just API management, as all three areas should cover more than just the API, but they are still very coupled to the core aspects of API management. In my definition, API management is very much about managing the consumption of resources, and not always the other aspects of API operations. This isn’t just my definition, it is what I’m seeing with the commoditization of API management like we see over at Amazon Web Services.
AWS API Gateway is really about accounts, applications, authentication, and services. The logging and analysis are delivered by AWS CloudWatch. For this particular project I’m using Github and Jekyll as the portal, and custom delivering the on-boarding, usage, and supporting APIs separately. Further narrowing down my definition of just what API management is. I’d say that AWS represents this evolution in API management well, with the decoupling of concerns between AWS API Gateway and AWS CloudWatch. If you apply AWS Cognito to authentication, you can separate out another concern. I do not yet see any viable solution for handling usage, billing, and what used to be the business and monetization side of API management.
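To ground this decoupling in something concrete, here is a rough sketch of wiring up the accounts, applications, and services side of things using the AWS API Gateway client in boto3. The identifiers are placeholders, and your usage plan details will vary:

```python
# sketch_aws_api_management.py -- illustrative use of AWS API Gateway for the
# accounts, applications, and services side of API management (via boto3).
import boto3

apigateway = boto3.client("apigateway")

# An application gets an API key it can use to identify itself.
key = apigateway.create_api_key(name="example-consumer", enabled=True)

# A usage plan ties consumption rules to one or more deployed API stages.
plan = apigateway.create_usage_plan(
    name="example-plan",
    apiStages=[{"apiId": "abc123restid", "stage": "prod"}],  # placeholder IDs
    quota={"limit": 10000, "period": "MONTH"},
)

# Associate the key with the plan, connecting the application to the services.
apigateway.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)

# Logging and analysis then happen over in CloudWatch, not here.
```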
This API project I am working on is already using AWS for the backend of their operations, so I’m investing cycles into better understanding the moving parts of API management in the context of the AWS platform. It makes sense to think some more about the decoupling of API management in the context of AWS, since they are a major player in the commoditization, and maturing of the concept. Once I’m done with AWS, I’m going to take another look at Google, then Azure, who are the other major players defining the future of API management.
Around 2010, the world of APIs began picking up speed, driven by the rise of the iPhone, and then Android mobile platforms. Web APIs had been used for delivering data and content to websites for almost a decade at that point, but their potential for delivering resources to mobile phones is what pushed APIs into the spotlight. The API management providers pushed the notion of being multi-channel, and being able to deliver to web and mobile clients, using a common stack of APIs. Seven years later, web and mobile are still the dominant clients for API resources, but we are seeing a next generation of clients begin to get more traction, which includes voice, bot, and other conversational interfaces.
If you deliver data and content to your customers via your website and mobile applications, the chances that you will also be delivering it to conversational interfaces, and the bots and assistants emerging via Alexa and Google Home, as well as on Slack, Facebook, Twitter, and other messaging platforms, are increasing. I’m not selling the idea that everything will be done with virtual assistants and voice commands in the near future, but as a client, voice will continue to see mainstream user adoption, showing up in automobiles, and the other Internet connected devices emerging in our world. I am not a big fan of talking to devices, but I know many people who are.
I don’t think Siri, Alexa, and Google Home will live up to the hype, but there are enough resources being invested into these platforms, and the devices that they are enabling, that some of it will stick. In the cracks, interesting things will happen, and some conversational interfaces will evolve and become useful. In other cases, as a consumer, you won’t be able to avoid the conversational interfaces, and will be required to engage with bots, and use voice enabled devices. This will push the need to have conversationally literate APIs that can deliver data to people in bite-size chunks. Sensors, cameras, drones, and other Internet-connected devices will increasingly be using APIs to do what they do, but voice, and other types of conversational interfaces will continue to evolve to become a common API client.
I am hoping at this point we begin to stop counting the different channels we deliver API data and content to. Despite many of the Alexa skills, and Slack bots I encounter being pretty yawn-worthy, I’m still keeping an eye on how APIs are being used by these platforms. Even if I don’t agree with all the uses of APIs, I still find the technical, business, and politics behind them worth tuning into as they evolve. I tend not to push my clients towards voice or bot applications if they aren’t very far along in their API journey, but I do make sure they understand that one of the reasons they are doing APIs is to support a wide and evolving range of clients, and that at some point they’ll have to begin studying how voice, bots, and other conversational approaches will be a client they have to consider a little more in their overall strategy.
You don’t find me showcasing specific APIs often. I’m usually talking about an API because of their approach to the technology, business, or politics of how they do APIs. It just isn’t my style to highlight APIs unless I think they are interesting, delivering value that is worth talking about, or possibly reflecting a meaningful trend that is going on. In this case it is a useful API that I think brings value, but it also provides an example I can showcase to non-developer folks as a meaningful demonstration of what an API can do.
The API I’m talking about today is the Air API, an asthma API from Propeller, which provides a set of free tools to help people understand current asthma conditions in their neighborhoods. The project is led by Propeller data scientists and clinical researchers, looking to leverage the Air API to help predict how asthma may be affected by local conditions, and includes a series of tools that share local asthma conditions, ranging from an email or text subscription, to an embeddable Air Widget for other websites.
The Air API provides an easy to explain example of what is possible with APIs. Environmental APIs will continue to be an important aspect of doing APIs. Aggregating sensor and other data helps us understand the air, water, weather, and other critical environmental factors that impact our lives each day. I like the idea of these APIs being open and available to 3rd party developers to build tools on top of, while the platforms use them as a marketing vehicle for their other products and services, making sure to keep the valuable data accessible to everyone.
I’ll put the Air API into my toolbox of APIs I use to help onboard folks with APIs. If they are impacted by asthma, or know someone who is, it helps make the personal connection, which can be important when on-boarding folks with the abstract concepts surrounding APIs. People tend to not care about technology until it makes an impact on their world. Which is one reason I think healthcare and environment APIs are going to play an important role in the sector for years to come. They provide a rich world of data, content, and algorithms, that can be exposed via APIs, and be applied in meaningful ways in people’s lives. Leaving a (hopefully) positive impression on folks about what APIs can do.
It is always funny how long some concepts take to fully capture my attention. Sometimes I understand a concept on the surface, but never really invest the time into thinking deeply about how it actually fits into the big picture of my API research. One of these concepts is “headless”, most commonly applied to the “headless CMS”. The Wikipedia entry for headless CMS proclaims, “a Headless CMS is a back-end only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device.”
A headless CMS is basically API-first, but instead of the API being the first focus, the entire CMS, and the administrative system for managing the content, gets top billing, with the API playing a critical supporting role. It’s what I am always prescribing as a way for API providers to consider the relationship between their applications, and their backend API resources. Decoupling your apps from the backend resources. The administration interface for the content management system is one application, and each of your web, mobile, or other applications act as independent solutions. I see headless as just a business view of doing APIs, which makes sense when selling the concept to normals.
I’m adding a research area for headless that augments my API deployment research. Some of the open source implementations I’ve come across, like Directus, are pretty slick. It’s a pretty quick way to go from API to something that immediately benefits the average user in as short a time as possible. Headless is just another way to frame the API conversation in the context of delivering an internal content management system, over getting 3rd party developers to build applications on top. It is something that is still possible because there is an API behind it, but it is not the primary objective in any headless implementation.
While the main definition of headless is about having an API behind everything, I am also finding examples where Github is the backend, instead of just a RESTful API. Prose.io, which is actually the CMS I use for API Evangelist, is considered a headless CMS, but Github acts as my backend. Some of the headless CMS solutions I’ve used that depend on Github use Git as the connection, while others depend on the Github API for reading and writing data and content. This reflects how I use Github across my projects, beyond just Prose.io, except I am depending on web APIs, Github, as well as Google Sheets for the backend of Jekyll driven websites and applications.
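If you are curious what “Github as the backend” looks like in practice, here is a minimal sketch of reading and writing a Jekyll data file through the Github contents API. The repository, path, and token are all placeholders:

```python
# github_backend.py -- hypothetical sketch of using the Github contents API
# as a read/write backend for a Jekyll driven site.
import base64

import requests

REPO = "example-user/example-site"          # placeholder repository
PATH = "_data/stories.json"                 # a Jekyll data file as the "database"
URL = f"https://api.github.com/repos/{REPO}/contents/{PATH}"
HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}  # placeholder token

# Read: fetch the current file, which comes back base64 encoded.
resp = requests.get(URL, headers=HEADERS)
resp.raise_for_status()
current = resp.json()
content = base64.b64decode(current["content"]).decode("utf-8")

# Write: commit an updated version back, referencing the existing blob sha.
updated = content.replace("draft", "published")  # stand-in edit
requests.put(URL, headers=HEADERS, json={
    "message": "Update stories via the API",
    "content": base64.b64encode(updated.encode("utf-8")).decode("utf-8"),
    "sha": current["sha"],
}).raise_for_status()
```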
I have been tracking on headless CMS stories for almost two years now, but after diving a little deeper this last week, it finally clicked for me. I feel headless is a good way to help the world move past WordPress, and embrace a more decoupled way of delivering websites, mobile, and other types of applications. I’m going to deploy Directus so that I can better understand headless as an approach to deploying APIs, and see about deploying basic demo implementations that I can point to. I’m hoping to use it as a way to introduce folks to APIs where public APIs is not the first objective, allowing them to get their feet wet with a simple API and website implementation. Helping deliver value without all the immediate risk, allowing folks to learn about how APIs can drive one or many applications in a more controlled internal environment.
I’m dialing in a set of machine learning APIs that I use to obfuscate and distort the images I use across my storytelling. The code is getting a little more hardened, but there is still so much work ahead when it comes to making sure it does exactly what I need it to do, with only the dials and controls I need–nothing more. I’m the only consumer of my APIs, which I use daily, with updates made to the code along the way, evolving the request and response structures until they meet my needs. Eventually the APIs will be done (enough), and I’ll stop messing with them, but that will take a couple more months of pushing the code forward.
While the code for these APIs is far from finished, I find the API helps obfuscate and diffuse the unfinished nature of things. The API maintains a single set of paths, and while I might still evolve the number of parameters it accepts, and the fields it outputs, the overall interface will keep a pretty tight surface area despite the perpetually unfinished backend. I like this aspect of operating APIs, and how they can be used as a facade, allowing you to maintain one narrative on the front-end, even with another occurring behind the scenes. I feel like API facades really fit with my style of coding. I’m not really an A grade programmer, more a B- level one, but I know how to get things to work–if I have a polished facade, things look good.
Honestly, I’m pretty embarrassed by my wrappers for TensorFlow. I’m still figuring everything out, and finding new ways of manipulating and applying TensorFlow models, so my code is rapidly changing, maturing, and not always complete. When it comes to the API interface I am focused on not introducing breaking changes, and maintaining a coherent request and response structure for the image manipulation APIs. Even though the code will at some point stabilize, with updates becoming less frequent, the code will probably not see the light of day on Github, but the API might eventually be shared beyond just me, providing a simple, usable interface someone else can use.
I wouldn’t develop APIs like this outside a controlled group of consumers, but I find being both API provider and consumer pushes me to walk a line between an unfinished and evolving backend, and a stable, simple API without any major breaking changes. The fewer fixes I have to make to my API clients, the more successful I’ve been at pushing forward the backend, while keeping the API in forward motion without the friction. I guess I just feel like the API makes for a nice wrapper for code, acting as a shock absorber for the ups and downs of evolving code, and getting it to do what you need. Providing a nice facade that obfuscates the evolving code behind my API.
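To make the facade concept a little more concrete, here is a minimal sketch of the pattern I am describing. The route and contract stay fixed, while the function behind it can be rewritten as often as needed, and all of the names here are mine, not any particular framework’s convention:

```python
# facade.py -- a stable API contract wrapping perpetually unfinished code.
from flask import Flask, jsonify, request

app = Flask(__name__)


def distort_image(image_url, intensity=0.5):
    """The messy, evolving backend. This implementation can change daily, as
    long as it keeps accepting a URL and returning a result URL."""
    # ... the TensorFlow model wrangling would happen here ...
    return {"result_url": image_url, "intensity": intensity}


@app.route("/images/distort", methods=["POST"])
def distort():
    # The facade: one path, and a small, stable set of parameters and fields.
    payload = request.get_json(force=True)
    result = distort_image(
        payload["image_url"],
        intensity=payload.get("intensity", 0.5),
    )
    return jsonify(result)
```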
I was doing some API security research and stumbled across vFeed, a “Correlated Vulnerability and Threat Intelligence Database Wrapper”, providing a JSON API of vulnerabilities from the vFeed database. The approach is a Python API, and not a web API, but I think it provides an interesting blueprint for open source APIs. What I found (somewhat) interesting about the vFeed approach was the fact that they provide an open source API, and database, but if you want a production version of the database with all the threat intelligence, you have to pay for it.
I would say their technical and business approach needs a significant amount of work, but I think there is a workable version of it in there. First, I would create Python, PHP, Node.js, Java, Go, and Ruby versions of the API, making sure it is a web API (see the sketch below). Next, remove the production restriction on the database, allowing anyone to deploy a working edition, just minus all the threat data. There is a lot of value in there being an open source set of threat intelligence sharing databases and APIs. Then after that, get smarter about having a variety of different free and paid data subscriptions, not just a single database–leverage the API presence.
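As a sketch of that first step, here is roughly what putting a simple web API in front of a local vulnerability database might look like. The table and columns are hypothetical stand-ins, not vFeed’s actual interface:

```python
# threat_web_api.py -- hypothetical sketch of exposing a local vulnerability
# database as a web API, rather than only a Python library API.
import sqlite3

from flask import Flask, abort, jsonify

app = Flask(__name__)
DB = "threats.db"  # placeholder local database


@app.route("/vulnerabilities/<cve_id>")
def get_vulnerability(cve_id):
    conn = sqlite3.connect(DB)
    row = conn.execute(
        "SELECT cve_id, summary, severity FROM vulnerabilities WHERE cve_id = ?",
        (cve_id,),
    ).fetchone()
    conn.close()
    if row is None:
        abort(404)  # unknown identifier
    return jsonify({"cve_id": row[0], "summary": row[1], "severity": row[2]})
```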
You could also get smarter about how the database and API enable companies to share their threat data, plugging it into a larger network, making some of it free, and some of it paid–with revenue share all around. There should be a suite of open source threat information sharing databases and APIs, and a federated network of API implementations. Complete with a wealth of open data for folks to tap into and learn from, but also with some revenue generating opportunities throughout the long tail, helping companies fund aspects of their API security operations. Budget shortfalls are a big contributor to security incidents, and some revenue generating activity would be positive.
So, not a perfect model, but enough food for thought to warrant a half-assed blog post like this. Smells like an opportunity for someone out there. Threat information sharing is just one dimension of my API security research where I’m looking to evolve the narrative around how APIs can contribute to security in general. However, there is also an opportunity for enabling the sharing of API related security information, using APIs. Maybe also generating of revenue along the way, helping feed the development of tooling like this, maybe funding individual implementations and threat information nodes, or possibly even fund more storytelling around the concept of API security as well. ;-)
I study the API universe every day of the week, looking for common patterns in the way people are using technology. I study almost 100 stops along the API lifecycle, looking for healthy practices that companies, organizations, institutions, and government agencies can follow when dialing in their API operations. Along the way I am also looking for patterns that aren’t so healthy, which are contributing to many of the problems we see across the API sector, but more importantly the applications and devices that they are delivering valuable data, content, media, and algorithms to.
One layer of my research is centered around studying API security, which also includes keeping up with vulnerabilities and breaches. I also pay attention to cybersecurity, which is a more theatrical version of regular security, with more drama, hype, and storytelling. I’ve been reading everything I can on the Equifax, Accenture, and other scary breaches, and like the other areas of the industry I track on, I’m beginning to see some common patterns emerge. It is something that starts with the way we use (or don’t use) technology, but then is significantly amplified by the human side of things.
There are a number of common patterns that contribute to these breaches on the technical side, such as not enough monitoring, logging, and redundancy in security practices. However, there are also many common patterns emerging in the business approach taken by leadership during security incidents and breaches. These companies’ security practices are questionable, but I’d say the thing that is the most unacceptable about all of this is the communication around these security events. I feel like they demonstrate just how dysfunctional things are behind the scenes at these companies, but also demonstrate their complete lack of respect and concern for the individuals who are impacted by these incidents.
I am pretty shocked by how little some companies are investing in API security. The lack of conversation from API providers about their security practices, or lack thereof, demonstrates how much work we still have to do in the API space. It is something that leaves me concerned, but willing to work with folks to help find the best path forward. However, when I see companies do all of this, and then not tell people for months, or years, after a security breach, and obfuscate and bungle the response to an incident, I find it difficult to muster up any compassion for the situations these companies have put themselves in. Their security practices are questionable, but their communication around security breaches is unacceptable.
I wrote about Tyk’s API surgery meetups last week, adding a new approach to our API event and workshop toolbox, and next I wanted to highlight the gRPC Meetup Kit, a resource for creating your own gRPC event. gRPC is an approach out of Google for designing, delivering, and operating high performance APIs. If you look at the latest wave of APIs out of Google you’ll see they are all REST and/or gRPC. Most of them are dual speed, providing both REST and gRPC. gRPC is an open source initiative, but very much a Google led effort that we’ve seen picking up momentum in 2017.
While I am keeping an eye on gRPC itself, this particular story is about the concept of providing a Meetup kit for your API related service or tooling, an “In a Box” solution that anyone can use to hold a Meetup. The gRPC team provides three groups of resources:
gRPC 101 Presentation
- Talk - A 15 minute course introduction video.
- Slides - Slides that go along with the talk.
- Codelab - A 45 minute codelab that attendees can complete using Cloud Shell.
Resources and community
- gRPC Website
- GitHub Source
- Ask Questions
- Keep in Touch
Request Support for Your Event
It provides a nice blueprint for what is needed when crafting your own Meetup kit, as well as some material you could weave into any other type of Meetup or workshop that might cover gRPC. Maybe an API design and protocol workshop, where you cover all of the existing approaches out there today like REST, Hypermedia, gRPC, GraphQL, and others. If nothing else the gRPC Meetup Kit provides a nice forkable project that you could use as scaffolding for your own kit.
As I mentioned in my piece about Tyk, I don’t think Meetups are going anywhere as a tool for API providers, and API service providers, to reach a developer audience. However, I think we are going to have to get more creative about how we organize, produce, and incentivize others to put them on. They are a great vehicle for bringing together folks in a community to learn about technology, but we have to make sure they are delivering value for the people who show up. I am guessing that a little planning, and evolving a toolkit using Github, is a good way to approach putting on Meetups, workshops, and other small events around your products, services, and tooling.
API evangelism, and even advocacy, has always been a challenge to introduce at many organizations, because many groups aren’t really well versed in the discipline, and it often tends to take on a more marketing, or even sales-like, approach, which can hurt its impact. I’ve worked with groups to rebrand, and change how they evangelize APIs internally, with partners, and the public, trying to ensure the efforts are more effective. While I still bundle all of this under my API evangelism research, I am always looking for new approaches that push the boundaries, and evolve what we know as API evangelism, advocacy, outreach, and other variations.
I was introduced to a new variation of the internal API evangelism concept a few weeks back while at Capital One, talking with my friend Matthew Reinbold (@libel_vox) about their approach to API governance. His team at the Capital One API Center of Excellence has the concept of the API coach, and I think Matt’s description from his recent API governance blueprint story sums it up well:
At minimum, the standards must be a journey, not a destination. A key component to “selective standardization” is knowing what to select. It is one thing for us in our ivory tower to throw darts at market forces and team needs. It is entirely another to repeatedly engage with those doing the work.
Our coaching effort identifies those passionate practitioners throughout our lines of business who have raised their hands and said, “getting this right is important to my teams and me”. Coaches not only receive additional training that they then apply to their teams. They also earn access to evolving our standards.
In this way, standards aren’t something that are dictated to teams. Teams drive the standards. These aren’t alien requirements from another planet. They see their own needs and concerns reflected back at them. That is an incredibly powerful motivator toward acceptance and buy-in.
A significant difference here between internal API evangelism and API coaching is you aren’t just pushing the concept of APIs (evangelizing), you are going the extra mile to focus on healthy practices, standards, and API governance. Evangelism is often seen as an API provider to API consumer effort, which doesn’t always translate to API governance internally across organizations who are developing, deploying, and managing APIs. API coaches aren’t just developing API awareness across organizations, they are cultivating a standardized, bottom up, as well as top down awareness around providing and consuming APIs. Providing a much more advanced look at what is needed across larger organizations, when it comes to outreach and communication.
Another interesting aspect of Capital One’s approach to API coaching is that this isn’t just top down governance, it has a bottom up, team-centered, and very organic approach. It is about standardizing and evolving culture across many groups, but in a way that allows teams to have a voice, and not just be mandated what the rules are, and required to comply. The absence of this type of mindset is the biggest contributor to the lack of API governance we see across the API community today. This is what I consider the politics of APIs, something that often trumps the technology in all of this.
API coaching augments my API evangelism research in a new and interesting way. It also dovetails with my API design research, as well as begins rounding off a new area I’ve wanted to add for some time, but just have not seen enough activity in to warrant doing so–API governance. I’m not a big fan of the top down governance that was given to us by our SOA grandfathers, and the API space has largely been doing alright without the presence of API governance, but I feel like it is approaching the phase where a lack of governance will begin to do more harm than good. It’s a drum I will start beating, with the help of Matt and his team’s work at Capital One. I’m going to reach out to some of the other folks I’ve talked with about API governance in the past, and see if I can produce enough research to get the ball rolling.
I’m regularly looking through API providers, service providers, and open data platforms, looking for interesting ways in which folks are exposing APIs. I have written about Kentik exposing the API call behind each dashboard visualization for their networking solution, as well as CloudFlare providing an API link for each DNS tool available via their platform. All demonstrating healthy ways we can show how APIs are right behind everything we do. Today’s example of how to provide API access is out of New York Open Data, providing access to 311 service requests made available via the Socrata platform.
The page I’m showcasing provides access to 311 service requests from 2010 to present, with all the columns and metadata for the dataset, complete with a handy navigation toolbar that lets you view data in Carto or Plot.ly, download the full dataset, access it via API, or simply share it via Twitter, Facebook, or email. It is a pretty simple example of offering up multiple paths for data consumers to get what they want from a dataset. Not everyone is going to want the API. Depending on who you are, you might go straight for the download, or opt to access via one of the visualization and charting tools. Depending on who you are targeting with your data, the list of tools might vary, but the NYC OpenData example via Socrata provides a nice example to build upon. With the most important message being do not provide only the options you would choose–get to know your consumers, and deliver the solutions they will also need.
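The API side of the page is just as approachable as the download buttons. Here is a quick sketch of pulling a handful of 311 service requests from the Socrata endpoint behind the page. Treat the dataset identifier and filter as illustrative, and check the page itself for the current values:

```python
# nyc_311.py -- pulling 311 service requests from NYC Open Data via Socrata.
import requests

# The SODA endpoint for the 311 Service Requests dataset (illustrative ID).
URL = "https://data.cityofnewyork.us/resource/erm2-nwe9.json"

resp = requests.get(URL, params={
    "$limit": 5,                           # just a handful of records
    "$where": "complaint_type = 'Noise'",  # SODA supports SQL-like filtering
})
resp.raise_for_status()

for record in resp.json():
    print(record.get("complaint_type"), record.get("created_date"))
```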
The NYC approach differs from how Kentik or CloudFlare make the APIs behind everything available to users, but it adds to the number of examples I have to show people how APIs, and API enabled integration, can be exposed through the UI, helping educate the masses about what is possible. I could see standardized buttons, drop downs, and other embeddable tooling emerge to help deliver solutions like this for providers. Something like we are seeing with the serverless webhooks out of Auth0 Extensions. Some sort of API-enabled goodness that triggers something, and can be easily embedded directly into any existing web or mobile application, or possibly a browser toolbar–opening up API enabled solutions to the average user.
One of the reasons I keep showcasing examples like this is that I want to keep pushing back on the notion that APIs are just for developers. Simple, useful, and relevant APIs are not beyond what the average web application user can grasp. They should be present behind every action, visualization, and dataset made available online. When you provide useful integration and interoperability examples that make sense to the average user, and give them easy to engage buttons, drop downs, and workflows for implementing, more folks will experience the API potential in their world. The reasons us developers and IT folk keep things complex, and outside the realm of the normal folk, are more about us, our power plays, and our inability to simplify things so that they are accessible beyond those in the club.
Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don’t, then you should not be reinventing the wheel out there, and you should be leveraging one of the existing API monitoring services out there on the market. When you are getting started with monitoring your APIs I recommend you begin with uptime and downtime, and once you deliver successfully on that front, I recommend you work on API performance, and the responsiveness of your APIs.
You should begin with making sure you are delivering on the service level agreement you have in place with your API consumers. What, you don’t have a service level agreement? No better time to start than now. If you don’t already have an explicitly stated SLA in place, I recommend creating one internally, and seeing what you can do to live up to it, then once you ensure things are operating at acceptable levels, share it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.
To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, who is obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you’ve met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time, you have the tools to help you regularly report internally, as well as externally to your API consumers.
I wish that every stop along the API lifecycle had a common definition for the aspects of service level agreements it contributes to, and was something that multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I’d like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I’d love for API plans, pricing, and even terms of service to be measurable and reportable in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I’d love to see the concept of the SLA evolve to cover all aspects of the quality of service, beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something we should be emulating more across our API operations.
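Here is the kind of machine readable assertion I am talking about, sketched out as a simple script. The endpoint, thresholds, and names are all made up for illustration, the point being that an SLA becomes something code can verify:

```python
# sla_check.py -- a hypothetical machine readable SLA assertion.
import time

import requests

SLA = {
    "endpoint": "https://api.example.com/status",  # placeholder API
    "max_response_ms": 500,
    "expected_status": 200,
}


def check_sla(sla):
    """Make one call and report whether it lived up to the stated SLA."""
    start = time.time()
    resp = requests.get(sla["endpoint"], timeout=10)
    elapsed_ms = (time.time() - start) * 1000
    return {
        "status_ok": resp.status_code == sla["expected_status"],
        "latency_ok": elapsed_ms <= sla["max_response_ms"],
        "elapsed_ms": round(elapsed_ms, 1),
    }


if __name__ == "__main__":
    print(check_sla(SLA))
```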
I’ve been doing a lot of thinking about algorithmic transparency, as well as a more evolved version of it I’ve labeled as algorithmic observability. Many algorithmic developers feel their algorithms should remain black boxes, usually due to intellectual property concerns, but in reality the reasons will vary. My stance is that algorithms should be open source, or at the very least have some mechanisms for auditing, assessing, and verifying that algorithms are doing what they promise, and that algorithms aren’t doing harm behind the scenes.
This is a concept I know algorithm owners and creators will resist, but algorithmic observability should work like food labels, just in a more machine readable way, allowing them to be validated by other external (or internal) systems. Similar to food you buy in the store, you shouldn’t have to give away the whole recipe and secret sauce behind your algorithm, but there should be all the relevant data points, inputs, outputs, and other “ingredients” or “nutrients” that go into the resulting algorithm. I talked about algorithm attribution before, and I think there should be some sort of algorithmic observability manifest, which provides the “label” for an algorithm in a machine readable format. It should give all the relevant sources, attribution, as well as the inputs and outputs for an algorithm–with different schema for different industries.
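To make the manifest idea a little more tangible, here is a hypothetical sketch of what such a machine readable label might contain. The schema is entirely invented, and as I said, each industry would need to define its own:

```python
# algorithm_label.py -- a hypothetical machine readable "nutrition label"
# for an algorithm; every field and value here is invented for illustration.
ALGORITHM_LABEL = {
    "name": "example-credit-scoring-model",
    "version": "2.3.1",
    "owner": "Example Corp",
    "training_data_sources": [               # attribution for the "ingredients"
        "internal-transactions-2016",
        "public-census-extract",
    ],
    "inputs": {
        "income": "number (USD, annual)",
        "zip_code": "string",
    },
    "outputs": {
        "score": "number (300-850)",
    },
    "audit_endpoint": "https://api.example.com/audit",  # sandboxed verification
    "last_audited": "2017-10-01",
}
```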
In addition to there being an algorithmic observability “label” available for all algorithms, there should be live, or at least virtualized, sandboxed instances of the algorithm for verification and auditing of what is provided on the label. As we saw with the Volkswagen emissions scandal, algorithm owners can cheat, but it would provide an important next step for helping us understand the performance, or lack of performance, of the algorithms we are depending on. The reason I call this algorithmic observability, instead of algorithmic transparency, is that each algorithm should be observable using its existing inputs and outputs (APIs), and not just be a “window” you can look through. It should be machine readable, and auditable by other systems in real time, and at scale. Going beyond just being able to see into the black box, to also being able to assess and audit what is occurring in real time.
Algorithmic observability regulations would work similar to what we see with food and drugs, where if you make claims about your algorithms, they should have to stand up to scrutiny. Meaning there should be standardized algorithmic observability controls that allow government regulators, industry analysts, and even the media to step up and assess whether or not an algorithm lives up to the hype, or is doing some shady things behind the scenes. Ideally this would be something that technology companies would do on their own, but based upon my current understanding of the landscape, I’m guessing that is highly unlikely, and it will be something that has to be mandated by government in a variety of critical industries incrementally. If algorithms are going to be impacting our financial markets, elections, legal systems, education, healthcare, and other essential industries, we are going to have to begin the hard work of establishing some sort of framework to ensure they are doing what is being sold, and not hurting or exploiting people along the way.
Creating regular content for your blog is essential to maintaining a presence. If you don’t publish regularly, and refresh your content, you will find your SEO, and wider presence, quickly becoming irrelevant. I understand that unlike me, many of you have jobs, and responsibilities when it comes to operating your APIs, and carving out the time to craft regular blog posts can be difficult. To help you in your storytelling journey I am always looking for other approaches that alleviate this pain, while helping keep your blog active, and ensuring folks will continue stumbling across your API, or API service, via Google, or on social media.
Another interesting example of how to keep your blog fresh came from my partners over at Runscope, who conducted a featured guest blog post series, where they were paying API community leaders to help “create an incredible resource of blog posts about APIs, microservices, DevOps, and QA.” Which has produced a handful of interesting posts:
- Monolith to Microservices: Transforming a web-scale, real-world e-commerce platform using the Strangler Pattern
- You Might Not Need GraphQL
- 3 Easy Steps to Cloud Operational Excellence
- Building a Steam Powered IoT API with Thingsboard
One thing to note is that Runscope paid $500.00 per post to help raise the bar when it comes to the type of author that will step up for such an effort. I’ve seen companies try to do this before, offering gift cards, swag, and even nothing in return, with varying grades of success and failure. I’m not saying a guest author program for your blog will always yield the results you are looking for, but it is a good way to help build relationships with your community, and help augment your existing workload, with some regular storytelling on the blog.
A guest blogger program is a tool I will be adding to my API communications research, expanding on the tools API operators have in their toolbox to keep their communication strategies active. An active blog does more than just educate your community and boost your SEO. An active blog that is informative and relevant shows that there is someone home behind an API, and that they are investing in the platform. While there are exceptions, the clearest sign that an API will soon be deprecated, or does not have the resources to support consumers properly, is a blog that hasn’t been updated in the last six months. While I’m reviewing, indexing, and learning about different APIs, when I come across an inactive blog, or Twitter account for an API, I’ll almost always keep moving, feeling like there really isn’t much worthwhile there, as it will soon be gone.
We all (well, most of us) strive to deliver as stable of an API presence as we possibly can. It is something that is easier said than done. It is something that takes caring, as well as the right resources, experience, team, management, and budget to do APIs just right. It is something the API idols out there make look easy, when they really have invested a lot of time and energy into developing an agile, yet scalable approach to ensuring APIs stay up and running. Something that you might be able to achieve with a single API, but can easily be lost between each API version, as we steer the ship forward.
I spend a lot of time at the developer portals of these leading API providers looking for interesting insight into how they are operating, and I thought Stripe’s vision around versioning their API was worth highlighting. Specifically their quote about treating your API like it is real life physical infrastructure.
“Like a connected power grid or water supply, after hooking it up, an API should run without interruption for as long as possible. Our mission at Stripe is to provide the economic infrastructure for the internet. Just like a power company shouldn’t change its voltage every two years, we believe that our users should be able to trust that a web API will be as stable as possible.”
This is possible. This is how I view Amazon S3, and Pinboard. These are two APIs I depend on to make my business work. Storage and bookmarking are two essential resources in my world, and both these APIs have consistently delivered stable API infrastructure, that I know I can depend on. I think it is also interesting to note that one is a tech giant, while the other is a viable small business (not startup). Demonstrating for me that there isn’t a single path to being a reliable, stable, API provider, despite what some folks might believe.
I am spending a lot of time lately thinking of API infrastructure along the lines of our energy grid, and transit system. The analogies are not perfect, but I do feel like as time moves on, some of our API infrastructure will continue to become commoditized, deemed essential, and something we depend on just as much as power, gas, and other utilities. These are the APIs that will be sticking around, the ones that can prove their usefulness, and deliver reliable integrations that do not change with each funding season, or technology trend. I’m looking forward to getting beyond the wild west days of APIs, and moving into the stage where APIs are treated like they are infrastructure, not just some toy, or latest fad, and we can truly depend on them and build our businesses around them.
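It is worth noting that Stripe backs up this philosophy mechanically, with date-based API versions that consumers can pin on a per request basis, a pattern worth borrowing. A quick sketch, using a placeholder key:

```python
# stripe_version_pin.py -- pinning a Stripe API version on a single request.
import requests

resp = requests.get(
    "https://api.stripe.com/v1/charges",
    auth=("sk_test_YOUR_KEY", ""),             # placeholder secret key
    headers={"Stripe-Version": "2017-08-15"},  # a date-based version pin
    params={"limit": 3},
)
print(resp.status_code)
```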
I am still working through my notes from a recent visit to Capital One, where I spent time talking with Matthew Reinbold (@libel_vox) about their API governance strategy. I was given a walk through their approach to defining API standards across groups, as well as how they incentivize, encourage, and even measure what is happening. I’m still processing my notes from our talk, and waiting to see Matt publish more on his work, before I publish too many details, but I think it is worth looking at from a high level view, setting the bar for other API governance conversations I am engaging in.
First, what is API governance? I personally know that many of my readers have a lot of misconceptions about what it is, and what it isn’t. I’m not interested in defining a single definition of API governance. I am hoping to help define it so that you can find a version of it that you can apply across your API operations. API governance is, at its simplest form, about ensuring consistency in how you do APIs across your development groups. A more robust definition might be about having an individual or team dedicated to establishing organization-wide API standards, helping train, educate, enforce, and in the case of Capital One, measure their success.
Before you can begin thinking about API governance, you need to start establishing what your API standards are. In my experience this usually begins with API design, but should also quickly extend to consistent API deployment, management, monitoring, testing, SDKs, clients, and every other stop along the API lifecycle. Without well-defined, and properly socialized API standards, you won’t be able to establish any sort of API governance that has any impact. I know this sounds simple, but I know more API providers who do not have any kind of API design guide, or other guide for their operations, than I know API providers who have consistent guides to design, and other stops along their API lifecycle.
Many API providers are still learning about what consistent API design, deployment, and management look like. In the API industry we need to figure out how to help folks begin establishing organization-wide API design guides, and get them on the road towards being able to establish an API governance program–it is something we suck at currently. Once API design, then deployment and management practices get defined, we can begin to realize some standard approaches to monitoring, testing, and measuring how effective API operations are. This is where organizations will begin to see the benefits of doing API governance, and it not just being a pipe dream. Something you can’t ever realize if you don’t start with the basics, like establishing an API design guide for your group. Do you have an API design guide for your group?
While talking with Matt about their approach at Capital One, he asked if it was comparable to what else I’ve seen out there. I had to be honest. I’ve never come across an organization that had established API design, deployment, and management practices, was actively educating and training their staff, and was then actually measuring the impact and performance of APIs, and the teams behind them. I know there are companies who are doing this, but since I tend to talk to more companies who are just getting started on their API journey, I’m not seeing any organization that is this advanced. Most companies I know do not even have an API design guide, let alone measure the success of their API governance program. It is something I know a handful of companies would like to strive towards, but at the moment API governance is more talk than it is reality.
If you are talking API governance at your organization, I’d love to learn more about what you are up to, no matter where you are at in your journey. I’m going to be mapping out what I’ve learned from Matt, and comparing it with what I’ve learned from other organizations. I will be publishing it all as stories here on API Evangelist, but will also look to publish a guide and white papers on the subject as I learn more. I’ve worked with some universities, government agencies, as well as companies on their API governance strategies. API governance is something that I know many API providers are working on, but Capital One was definitely the furthest along in their journey that I have come across to date. I’m stoked that they are willing to share their story, and don’t see it as their secret sauce, as it is something that doesn’t just need sharing, it is something we need leaders to step up on and show everyone else how it can be done.
I consider a road map for any API to be an essential building block, whether it is a public API or not. You should be in the business of planning the next steps for your API in an organized way, and you should be sharing that with your API consumers so that they can stay up to speed on what is right around the corner. If you want to really go the extra mile I recommend following what Tyk is up to, with their public road map using Trello.
With the API management platform Tyk, you don’t just see a listing of their API road map, you see all the work and conversation behind the road map, using the visual collaboration platform Trello. Using their road map you can see proposed features, which is great for seeing if something you want has already been suggested, and you can get at a list of what the next minor releases will contain. Plus, using the menu bar you can get at a history of the changes the Tyk team has made to the platform, going back for the entire history of the Trello board.
Using Trello you can subscribe to, or vote up, any of the message boards. If you want to submit something, you need to sign up and post it to the Tyk community. Then they’ll consider adding it to the proposed road map features. It is a pretty low cost, easy to make public approach to delivering a road map. Sometimes this stuff doesn’t need a complex solution, just one that provides some transparency, and helps your customers understand what is next. Tyk provides a nice example of a road map that any other API provider, or service provider, can follow.
Another interesting approach to delivering an API road map that I can add to my research. I’m a big fan of having many different ways of delivering the essential building blocks of API operations, using a variety of simple free or paid SaaS tools. You’d be surprised at how useful an open road map can be for your API. Even if you aren’t adding too many new features, or have a huge number of people participating, it provides an easy reference showing what is next for an API. It also shows someone is home behind the scenes, and that an API is actually active, alive, and something you should be using.
Coming up with creative things to write about regularly on the blog, and on Twitter, when you are operating an API is hard. It has taken a lot of discipline to keep posts going up on API Evangelist regularly for the last seven years–totaling almost 3K stories told so far. I don’t expect every API provider to have the same obsessive compulsive disorder that I do, so I’m always looking for innovative things that they can do to communicate with their API communities–something that Amazon Web Services is always good at providing healthy examples of that I feel I can showcase.
One thing the AWS team does on a regular basis is tweet out links to specific areas of their documentation that help users accomplish specific things with AWS APIs. The AWS security team is great at doing this, with recent examples focusing on securing things with the AWS Directory Service, and AWS Organizations. Each contains a useful description, an attractive looking image, and a link to a specific page in the documentation that helps you learn more about what is possible.
I have been pushing myself to make sure all headers, and sub headers, in my API documentation have anchors, so that I can not just link to a specific page, but link to a specific section, path, or other relevant item within my API documentation. This helps me in my storytelling when I’m looking to reference specific topics, and would help when it comes to tweeting out regular elements across my documentation. I’m slowly going to phase out some of the lower grade tweets of curated news that I push out, and replace them with relevant work I do in specific areas of my research–using my own work to fill the cracks over less than exciting things I may come across in the API space.
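If your documentation is plain HTML, retrofitting anchors doesn’t have to be a big lift. Here is a rough sketch of slugging every sub header so each section becomes linkable. It is regex-based, so treat it as illustrative rather than production ready:

```python
# add_anchors.py -- a rough sketch that gives every h2/h3 heading an id
# attribute so individual sections of documentation can be linked to.
import re


def slugify(text):
    """Turn a heading title into a URL-friendly anchor id."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


def add_anchors(html):
    def repl(match):
        tag, title = match.group(1), match.group(2)
        return f'<{tag} id="{slugify(title)}">{title}</{tag}>'
    return re.sub(r"<(h[23])>(.*?)</\1>", repl, html)


print(add_anchors("<h2>Securing Your Directory</h2>"))
# -> <h2 id="securing-your-directory">Securing Your Directory</h2>
```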
Tweeting out what is possible with your API, with links to specific sections of your API documentation, is something I’m going to add to my API communication research. Providing a common building block that other API providers, or even API service providers, can consider when looking for things to fill the cracks in their platform communication strategy. It is simple, useful to API consumers, and is an easy way to keep the Tweet stream regularly flowing, while helping developers understand what is possible. I feel it is also something that should impact how we craft our documentation, as well as our communication strategies, making sure we are publishing with appropriate titles and anchors so we can easily reference and cite the valuable information we are making available across our platforms.
I’ve been playing with TensorFlow for over a year now, specifically when it comes to working with images and video, and it has helped me understand what things look like behind the algorithmic curtain that seems to be part of a growing number of tech marketing strategies right now. Part of this learning is exploring beyond Google’s approach, who is behind TensorFlow, and understanding what is going on at AWS, as well as Azure. I’m still getting my feet wet learning about what Microsoft is up to with their platform, but I did notice one aspect of the Azure Machine Learning Studio encouraging developers to “publish, share, monetize” their ML models. While I’m sure there will be a lot of useless vaporware being sold within this realm, I’m simply seeing it as the next step in API monetization, and specifically the algorithmic evolution of being an API provider.
As the label says on the three ML models for sale in the picture, this is all experimental. Nobody knows what will actually work, or even what the market will bear. However, this is something APIs, and the business of APIs, excel at. Making a digital resource available to consumers in a retail, or even wholesale way via marketplaces like Azure and AWS, then playing around with features, pricing, and other elements, until you find the sweet spot. This is how Amazon figured out the whole cloud computing game, and became the leader. It is how Twilio, Stripe, and other API as a product companies figured out what developers needed, and what these markets would bear. This will play out in marketplaces like Azure and Google, as well as with startup players like Algorithmia–which is where I’ve been cutting my teeth, and learning about ML.
The challenge for ML API entrepreneurs will be helping consumers understand what their models do, or do not do. I see it as an opportunity, because there will be endless amounts of vaporware, ML voodoo, and smoke and mirrors trying to trick consumers into buying something, as well as endless traps when it comes to keeping them locked in. If you are actually doing something interesting with ML, it actually provides value in the business world, and you provide clear, concise, no BS language about what it does–you are going to do well. The challenge for you will be getting found in the mountains of crap that are emerging, and differentiating yourself from the smoke and mirrors that we are already seeing so much of. Another challenge you’ll face is navigating the vendor platform course set up by AWS, Google, and Azure as they battle it out for dominance–a game that many of us little guys will have very little power to change or steer.
It is a game that I will keep a close eye on. I’m even pondering publishing a handful of image manipulation models I’ve been working on. IDK. I do not think they are quite ready, and I’m not even entirely sure they are something I want widely used. I’m kind of enjoying using them in my own work, providing me with images I can use in my storytelling. I don’t think the ROI is there yet in the ML API game, and I’ll probably just keep being a bystander, and analyst on the sideline until I see just the right opportunity, or develop just the right model I think will stand out. After seven years of doing API Evangelist I’m pretty good at seeing through the BS, and I’m thinking this experience is going to come in handy in this algorithmic evolution of the API universe, where the magic of AI and ML put so many people under their spell.
I am helping a client think through their API management solution at the moment, so I’m working through all the moving parts of the how and why of API management solutions. The API management landscape has shifted since the last time I helped a small company navigate the process of getting up and running, so I wanted to work through each aspect and think critically before I make any recommendations. My client has a content API, which isn’t very complex, but possesses some pretty valuable data they’ve aggregated, curated, and are looking to make available via a simple web API. It is pretty clear that all developers will need a key to access the API, but I wanted to pause for a moment and think more about API rate limiting.
Why do we rate limit? The primary reason is to help manage the compute resources available for all API consumers. You don’t want any single user hitting the server too hard and taking things down for everyone else. I’d say after that, the next major reason is to enforce API access tiers, and ensure API consumers are only consuming what they should be. Both seem like pretty dated concepts that might need re-evaluation in general, but also in the context of this particular project. There is no free access to this API. I believe there will be a public account for test driving (making a very limited number of calls), and some keys that drive their embeddable strategy, but for access to the majority of content, developers will have to register for a key, and provide a credit card to pay for their consumption. Which leaves me with the question: should we be rate limiting at all?
If users are paying for whatever they consume, and there is a credit card on file, do we want to rate limit? Why are we so worried about server capacity in a cloud world? It seems like rate limiting is a legacy constraint that has continued to live on unquestioned, and is even propped up by accounting and business decisions over simple technical ones. API access tiers with varying rate limits are sometimes imposed as part of identity and access control, limiting what new users have access to, but oftentimes they are used to corral and route users into specific, measurable account plans that help startups predict and articulate revenue to investors. I know many of my friends disagree with my thoughts on this, but I feel these accounting decisions behind rate limiting hurt the bottom line more than they help. If your API is your product, rate limiting is hurting it; if your API consumers are your product, then it is helping.
My client in question is looking to build an actual business that sells a product to customers, without an exit strategy, so I want to do my best to help them understand how they can reduce technical and business complexity while maximizing revenue around the API services they are offering. If we have the API properly resourced with scalable compute, load balancing, monitoring, and other checks and balances, and we have a verified credit card on file for each API key holder, why do we want to rate limit? It seems like an unnecessary complexity for API consumers to have to wrestle with. Let’s just allow them to register, make API calls, and then measure and bill accordingly. Amazon provides a clear precedent for how this works, and from my experience I tend to spend more on my AWS bill than I do with services that keep me in tiered access plans. I’m not saying tiered access plans don’t have their place; I’m saying we should question their value each time we construct them, and not just assume they should be there by default.
A by-product of watching how the API management landscape has shifted recently is that I am reassessing each of the common building blocks of API management, and thinking more critically about the how and why behind their existence. There is a significant difference between rate limiting and metering API calls, and I don’t think we always have to do both. We still need the ability to turn off keys, and block specific user agents and IP addresses, but in some cases I think rate limiting shouldn’t be part of API management operations. We have the compute, storage, and database resources at our disposal to scale as we need to meet demand, and we have a credit card verified and on file to bill against–let’s just get out of API consumers’ way. In the case of this particular project I’m working on, I think this will be my recommendation: focus on reducing the amount of API management overhead, and simplify the load for API consumers along the way.
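To make this concrete, here is a minimal sketch of what metering without rate limiting might look like: record every authenticated call against its key, check no counters, and bill against the recorded usage at the end of the month. The table name, header, and pricing here are hypothetical, not taken from any particular API management product.

```php
<?php
// A minimal sketch of metering without rate limiting. Instead of rejecting
// requests over a quota, every authenticated call is simply recorded against
// the consumer's API key, and billing happens later against recorded usage.
// The api_usage table and the X-API-Key header are hypothetical.

$pdo = new PDO('mysql:host=localhost;dbname=api', 'user', 'pass');

function meter_request(PDO $pdo, string $apiKey, string $endpoint): void
{
    // Record the call; no counter is checked, so nothing is ever throttled.
    $stmt = $pdo->prepare(
        'INSERT INTO api_usage (api_key, endpoint, called_at) VALUES (?, ?, NOW())'
    );
    $stmt->execute([$apiKey, $endpoint]);
}

function monthly_bill(PDO $pdo, string $apiKey, float $pricePerCall): float
{
    // At the end of the month, bill against whatever was actually consumed.
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM api_usage
         WHERE api_key = ? AND called_at >= DATE_FORMAT(NOW(), "%Y-%m-01")'
    );
    $stmt->execute([$apiKey]);
    return $stmt->fetchColumn() * $pricePerCall;
}

// Example: meter one call, then calculate this month's bill at $0.005 per call.
$key = $_SERVER['HTTP_X_API_KEY'] ?? '';
meter_request($pdo, $key, '/content/search');
echo monthly_bill($pdo, $key, 0.005);
```

Turning off a misbehaving key is then an account-level decision, not a technical constraint baked into every request.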
It is interesting to take a fresh look at the API management landscape these days. It has been a while since I’ve looked through all the providers to see where their pricing is at, and what they offer. I’d say the space has definitely shifted from what things looked like from 2012 through 2015. There are now a number of open source offerings, which there weren’t in 2012, but the old guard has solidly turned their attention to the enterprise. There are cloud solutions like Restlet and SlashDB which really help you get up and running from existing data sources in the cloud, but for this particular project I am looking for a simple proxy and connector approach that can deploy on any infrastructure, and they don’t quite fit the requirements.
Apigee and the other more enterprise offerings have always been out of my league, and 3Scale’s entry level package is up to $750, which is a little out of my reach, but I do know they are open sourcing their offering now that they are part of Red Hat. There are API Umbrella, APIMan, Fusio, Monarch, and a handful of other solutions that will work, but they require a platform or specific language commitment that doesn’t fit this project. Everything else is of the enterprise caliber–nothing really that I would recommend to my customers who are just getting started on their API journey. I’m really left with the cloud giants, which I guess is one of the main reasons we are at this juncture in the evolution of API management. API management becoming a commodity has really shifted the landscape, making it more difficult to be a strong player–something Tyk and Kong are still managing to pull off.
If my customer were looking to launch a data API from an existing database, I’d point them to SlashDB or Restlet. If they are an enterprise customer, I’d point them to 3Scale. Tyk is pretty much my go-to for the lower end of the market, with Kong as the alternate. If my customer is already running on Google, Azure, or AWS, then I’m pretty much telling them to stay put and use the tooling that is available to them. Another thing I’m noticing has dropped out of prominence is the billing for API usage aspect of API management. It’s in Tyk and 3Scale, but it really wasn’t a component of many of the other solutions I’ve been looking at. Overall, things seem scattered out there after the last round of API management acquisitions, the shift in VC funding, and the cloud giants stepping into the game with their own solutions. I’m guessing that is just the circle of life and all.
In this new landscape I am going to have to spend more time playing with the low-end solutions that are available. It is essential that there are solutions accessible to folks who are just starting on their API journey. API management requires a climbable ladder, meaning you need to be able to afford the reach to the next rung, otherwise it can become quite a blocker. I was a big advocate for this in the early days, but stopped pushing on it because there were so many options out there. It will take some playing around to get a better feeling for where we are before I feel good about making recommendations to new players again. It is a process I should probably repeat each year, because things seem to be shifting a little more than I anticipated.
Speaking to the House Energy and Commerce Committee, former Equifax CEO Richard Smith pointed the finger at a single developer who failed to patch the Apache Struts vulnerability, saying that protocol was followed and a single developer was responsible, shifting the blame away from leadership. It sounds like a good answer, but when you operate in the space you understand that this was a systemic failure, and you shouldn’t be relying on a single individual, or even a single piece of scanning software, to verify the patch was applied. You really should have many layers in place to help prevent breaches like we saw with Equifax.
If I were interviewing the CEO, I’d have a few other questions for him, getting at some of the other systemic and process failures rooted in his lack of leadership and awareness:
- API Monitoring & Testing - You say the scanner for the Apache Struts vulnerability failed, but what about other monitoring and testing. The plugin in questions was a REST plugin, that allowed for API communication with your systems. Due to the vulnerability, extra junk information was allowed to get through. Where were your added API request and response integrity testing and monitoring process? Sure you were scanning for the vulnerability, but are you keeping an eye on the details of the data being passed back and forth? API monitoring & testing has been around for many years, and service providers like Runscope do this for a living. What other layers of monitoring and testing were in place?
- API Management - When you expose APIs like you did from Apache Struts, what does the standardized management approach look like? What sort of metering, logging, rate limiting, and analysis occurs on each endpoint, and what verification occurs to ensure that only approved clients have access? API management has been standard procedure for over a decade now for exposing APIs like this, both internally and externally. Why didn’t your API management process stop this sort of breach after only a couple hundred records went out? API management is about awareness regarding access to all your resources. You should have a dashboard, or at least some reports that you view as a CEO, on this aspect of operations.
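To make the monitoring point above a little more concrete, here is a minimal sketch of the kind of response integrity check I am talking about: compare each API response against the fields you expect, and raise an alarm when extra junk shows up. The endpoint and field names are hypothetical, and this is just one layer among the many that should exist, not a complete solution.

```php
<?php
// A minimal sketch of an API response integrity check, one extra monitoring
// layer beyond a vulnerability scanner. The endpoint and expected fields are
// hypothetical; the point is to alert on responses that break the contract.

$response = file_get_contents('https://api.example.com/records/123');
$data = json_decode($response, true);

$expectedFields = ['id', 'name', 'status'];
$problems = [];

if (!is_array($data)) {
    $problems[] = 'Response was not valid JSON';
} else {
    foreach ($expectedFields as $field) {
        if (!array_key_exists($field, $data)) {
            $problems[] = "Missing expected field: $field";
        }
    }
    // Flag unexpected extra fields in the payload, the kind of anomaly
    // exploit traffic tends to introduce.
    foreach (array_keys($data) as $field) {
        if (!in_array($field, $expectedFields, true)) {
            $problems[] = "Unexpected field in response: $field";
        }
    }
}

if ($problems) {
    // In a real setup this would page someone or trip a security trigger.
    error_log('API integrity check failed: ' . implode('; ', $problems));
}
```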
These are just two examples of systems and processes that should have been in place. You should not be depending on a single person or a single tool to catch this type of security incident. There should be many layers, with security triggers and notifications in place. Your CTO should be in tune with all of these layers, and you as the CEO should be getting briefed on how they work, and have a hand in making sure they are in place. I’m guessing that your company is doing APIs, but is dramatically behind the times when it comes to commonplace API management practices. This is your fault as the CEO. This is not the fault of a single employee, or any group of employees.
I am guessing that as a CEO you are more concerned with the selling of this data than you are with securing it in storage or transit. I’m guessing you are intimately aware of the layers that enable you to generate revenue, but you are barely investing in the technology and processes to do this securely, while respecting the privacy of your users. They are just livestock to you. They are just products on a shelf. It shows your lack of leadership to point the finger at a single person, or a single piece of technology, when there should have been many layers in place to catch this type of breach beyond a single vulnerability. It demonstrates your lack of knowledge regarding modern trends in how we secure and provide access to data, and you should never have been put in charge of such a large data brokerage company.
I am working with a client to develop a simple user interface on top of a Human Services Data API (HSDA) I launched for them. They want a basic website for searching, browsing, and navigating the organizations, locations, and services available in their API. Part of this work is helping them understand how modular and configurable their website is, with each page, or portion of a page, being a simple API call. It is taking a while for them to fully understand what they have, and the potential of evolving a web application in this way, but I feel like they are beginning to understand, and are taking the reins a little more when it comes to dictating what they want within this new world.
When I first published a basic listing of human services they were disappointed. They had envisioned a map of the listings, allowing users to navigate in a more visual way. I got to work helping them see the basic API call(s) behind the listing, and how we could use the JSON response in any way we wanted. I am looking to provide three main ways in which I can put the API data to work in a variety of web applications:
- Server-Side - A pretty standard PHP call to the API, taking the results and rendering them to the page using HTML (see the sketch after this list).
- Client-Side - Using JavaScript in the browser to call the API directly, and render the results on the page.
- Static Push - Calling the API using PHP, then publishing as YAML or JSON to a Jekyll site and rendering with Liquid and HTML.
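Here is a minimal sketch of that server-side approach. The base URL and field names are placeholders for whatever the HSDA deployment actually exposes; the pattern is what matters, with each piece of the page being a simple API call rendered as HTML.

```php
<?php
// A minimal sketch of the server-side approach: call the API with PHP,
// then render the JSON response as HTML. The base URL and field names
// are hypothetical placeholders for the actual HSDA deployment.

$apiUrl = 'https://api.example.com/services/';

$ch = curl_init($apiUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept: application/json']);
$services = json_decode(curl_exec($ch), true) ?: [];
curl_close($ch);

// Render the listing; every portion of the page is just an API call like this.
echo '<ul>';
foreach ($services as $service) {
    printf(
        '<li><strong>%s</strong> - %s</li>',
        htmlspecialchars($service['name'] ?? ''),
        htmlspecialchars($service['description'] ?? '')
    );
}
echo '</ul>';
```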
After moving past the public website, I’m beginning to show them the power of not just GETs via their API, but also POST, PUT, and DELETE. I find it is easy to show them the potential of an administrative system using a basic form. I find people get forms. Once you show them that in order to POST they have to have a special token or key, otherwise the API will reject the request, they feel a whole lot better about the process. I find the form tends to put things into context for them, going beyond displaying data and content, and allowing them to actually manage all this data and content. I find the modularity of an API really lends itself to giving business users more control over the user interface. They may not be able to do everything themselves, but they tend to be more invested in the process, and enjoy more ownership over it–which is a good thing.
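For a sense of what sits behind one of those forms, here is a minimal sketch of the POST, assuming a hypothetical endpoint and API key header. Without a valid key, the API rejects the request–which is exactly the behavior that puts business users at ease.

```php
<?php
// A minimal sketch of the POST behind an administrative form. Without a
// valid key in the header, the API rejects the request; with one, the
// form submission becomes a new record. The endpoint and the X-API-Key
// header name are hypothetical.

$payload = json_encode([
    'name'        => $_POST['name'] ?? '',
    'description' => $_POST['description'] ?? '',
]);

$ch = curl_init('https://api.example.com/services/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'X-API-Key: ' . getenv('HSDA_API_KEY'), // no key, no POST
    ],
]);
$response = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

// A 401 or 403 here is the rejection business users see without a token.
echo $status >= 400 ? "Rejected ($status)" : 'Record created';
```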
I’ve covered this topic several times before, but I figured I’d share again for folks who might have become readers in the last year, providing an overview of how API Evangelist works, to help eliminate confusion as you are navigating around my site, as well as to help you find what you are looking for. First, API Evangelist was started in the summer of 2010 as a research site to help me better understand what is going on in the world of APIs. In 2017, it is still a research site, but it has grown and expanded pretty dramatically into a network of websites, driven by a data and content core.
The most important thing to remember is that all my sites run on Github, which is my workbench in the API Evangelist workshop. apievangelist.com is the front door of the workshop, with each area of my research existing as its own Github repository, at its own subdomain within the apievangelist domain. An example of this can be found in my API design research, which you will find at design.apievangelist.com. As I do my work each day, I publish my research to each of my domains, in the form of YAML data for one of these areas:
- Organizations - Companies, organizations, institutions, programs, and government agencies doing anything interesting with APIs.
- Individuals - The individual people at organizations, or independently doing anything interesting with APIs.
- News - The interesting API-related and other news I curate and tag daily in my feed reader or as I browse the web.
- Tools - The open source tooling I come across that I think is relevant to the API space in some way.
- Building Blocks - The common building blocks I find across the organizations, and tooling I’m studying, showing the good and the bad of doing APIs.
- Patents - The API-related patents I harvest from the US Patent Office, showing how IP is impacting the world of APIs.
You can find the data for each of my research areas in the _data folder for each repository, which is then rendered as HTML for each subdomain using Liquid, via each Jekyll-driven website. All of this is research. It isn’t meant to be perfect, or a comprehensive directory for others to use. If you find value in it–great!! However, it is just my research laying on my workbench. It will change, evolve, and be remixed and reworked as I see fit, to support my view of the API sector. You are always welcome to learn from this research, or even fork and reuse it in your own work. You are also welcome to submit pull requests to add or update content that you come across about your organization or open source tool.
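For a sense of how research lands in those _data folders, here is a minimal sketch of the static push pattern mentioned earlier: pull from an API with PHP, write the result into _data (Jekyll reads JSON there just like YAML), and let the Liquid templates render it as HTML on the next build. The source URL and fields here are hypothetical, not my actual schema.

```php
<?php
// A minimal sketch of the static push pattern: pull data from an API with
// PHP, write it into a Jekyll site's _data folder as JSON (which Jekyll
// reads just like YAML), and let Liquid templates render it as HTML on
// the next build. The source URL and fields are hypothetical.

$source = 'https://api.example.com/organizations/';
$target = __DIR__ . '/_data/organizations.json';

$organizations = json_decode(file_get_contents($source), true) ?: [];

// Keep just the fields the Liquid templates need.
$records = array_map(function (array $org): array {
    return [
        'name'    => $org['name'] ?? '',
        'website' => $org['website'] ?? '',
        'tags'    => $org['tags'] ?? [],
    ];
}, $organizations);

file_put_contents(
    $target,
    json_encode($records, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES)
);

// A Liquid template can then loop over site.data.organizations to build
// the HTML listing for the subdomain.
```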
The thing to remember about API Evangelist is that it exists primarily for me. It is about me learning. I take what I learn and publish it as blog posts to API Evangelist. This is how I work through what I’m discovering as part of my research each day, and it is the vehicle I use to move my understanding of APIs forward. This is where it starts getting real. After seven years of doing this I am reaching 4K to 7K page views per day, and clearly other folks find value in reading and sifting through my research. Because of this I have four partners–3Scale, Restlet, Runscope, and Tyk–who pay me money to have their logo on the home page, in the navigation, and via a footer toolbar. Runscope also pays me to place a re-marketing tag on my site so they can show advertising to my users on other websites, and Facebook. This is how I pay my rent, and how I eat each month.
Beyond this base, I take my research and create API Evangelist industry guides. API Definitions and API Design are the two most recent editions, and I’m currently working on guides for data and databases, as well as deployment and management. These guides are sometimes underwritten by my partners, but mostly they are just the end result of my API research. I also spend time and energy taking what I know and crafting API strategy and white papers for clients, and occasionally I actually create APIs for people–mostly in the realm of human services, or other social good efforts. I’m not really interested in building APIs for startups, or in service of the Silicon Valley machine, even though I enjoy watching, studying, and learning from this world, because there are endless lessons in this community regarding how we can use technology, as well as how we should not be using it.
That is a pretty basic walkthrough of how API Evangelist works. It is important to remember I am doing this research for myself–to learn, and to make a living. API Evangelist is a production, a persona I created to help me wade through the technology, business, and politics of APIs. It reflects who I am, but honestly it is increasingly more bullshit than it is reality, kind of like the API space. I hope you enjoy this work. I enjoy hearing from my readers, and hearing how my research impacts your world. It keeps me learning each day, and keeps me from ever having to go get a real job. It is always a work in progress and never done, which I know frustrates some, but I find endlessly interesting–something that reflects the API journey, which you have to get used to if you are going to be successful doing APIs in this crazy world.
I am a big fan of user interfaces that bring APIs out of the shadows. Historically, APIs are often a footnote in the software as a service (SaaS) world, available as a link way down at the bottom of the page, in the settings, or in the help areas. Rarely are APIs made first-class citizens in the operations of a web application, which really just perpetuates the myth that APIs aren’t for everybody, and that the “normals” shouldn’t worry their little heads about them. In reality, EVERYBODY should know about APIs, and have the opportunity to put them to work, so we should stop burying the links to our APIs and our developer areas. If your API is too technical for a business user to understand what is going on, then you should probably get to work simplifying it, not burying it and keeping it in the developer and IT realm.
I have written before about how DNS provider CloudFlare provides an API behind every feature in their user interface, and I’ve found another great example of this over at the network API provider Kentik. In their network dashboard visualization tooling they provide a robust set of options for accessing the data behind the visuals, allowing you to export, view SQL, show the API call, and enter share view. In their post, they proceed to instruct you on how to get your API key as part of your account, as well as providing a pretty robust introduction into why APIs are important. This is how ALL dashboards should work, in my opinion. Any user should be introduced to APIs, and have the ability to get at the data behind them, export it, or directly make an API call in their browser or at the command line.
Developers like to think this stuff should be out of reach of the average user, but that is more about our own insecurities and power trips than it is about the average user’s ability to grasp this stuff. There is no reason why ALL user interfaces can’t be developed on top of APIs, with native functionality for getting at the API call, as well as the data, content, or algorithms behind each user interface feature. It makes for more powerful user interfaces, as well as more educated, literate, and efficient power users of our applications. If all web applications operated this way, we’d see a much more API-literate business world, where users would be asking more questions, curious about how things work, and experimenting with ways they can be more successful in what they do. While I do see positive examples like Kentik out there, I also find that many web application developers are further retreating from APIs being front and center, preferring to keep them in the shadows of web and mobile applications, out of reach of the average user. Something we need to reverse.
|<< Prev||Next >>|