
How Operable Built its ChatOps Cog Platform on Docker Images

Feb 14th, 2017 10:44am

To develop and package its cutting-edge ChatOps tool, startup Operable turned to Docker containers, both to speed the development of the software and to make it easy for customers to deploy the tool.

On the stage at RedMonk’s Monki Gras event in London, Operable Chief Technology Officer and co-founder Kevin Smith shared details of how his company built its ChatOps platform Cog.

ChatOps is shorthand for conversation-driven development, which combines tools and conversations: a chatbot, customized with plugins and scripts, helps your team collaborate and automate more effectively.

ChatOps is emerging as an important way to solve the problem of distributed team collaboration, whether across tools, languages or locations. Where you once had to track down colleagues to figure out who did what, chatbots now work in a transparent manner, tracking it all. It also brings a healthy amount of automation, which helps your continuous delivery process.

Whether you’re collocated or distributed, your team is probably already communicating via a chat client like Slack. Now imagine that inside delegated chat channels, you can track everyone’s actions and notifications, as well as any bugs, testing results, etc.

“I think Docker’s biggest innovation is Docker images” — Kevin Smith, Operable.

There are a variety of easily customizable open source chatbots, such as Hubot, Lita and Errbot. But, as Smith pointed out, “ChatOps is really only as good as the commands you’ve built for it, really only as good as its plug-in API.” Enter Cog.

Operable’s Cog in action

Smith’s talk wasn’t a product pitch, but rather the story of how his company built Cog. He kicked it off by noting that we spend a lot of time focusing on infrastructure, applications and automation management, but we often neglect the human factor.

Cog is a Unix-inspired shared automation shell or “modern shrink wrap for software,” which is packaged as Docker images and built with a plugin API on Docker.

Why Docker? Smith said that “Docker’s big innovation is not containers — they are nice and incredibly useful but it’s not their core innovation. And it’s not their easy deployments either,” which he calls a byproduct of the innovation.

“I think Docker’s biggest innovation is Docker images,” which he called “executable packages.” He says this is because the Docker images are:

  • Self-contained
  • Executable
  • Inspectable and testable
  • Cross-platform

The company chose to build its tool around shipping Docker images because, early in the platform’s life, its Docker images were downloaded vastly more than its other packages. Operable still releases its source code, but it stopped shipping traditional operating-system packages.

“We build Docker image packages and we’ve changed our whole pipeline to suit that purpose,” Smith said.

For his team, the benefits of using Docker images as shipping packages are:

  • Full control over the entire environment, though Smith noted that since they have this control, it’s up to them as the vendor to release updates on a timely basis.
  • Standardized configuration via the 12-factor app methodology, so all of their configuration can be done through environment variables.
  • Images are easily extendable to clustered environments, with support for Kubernetes and Swarm coming soon.
  • Simplified support and debugging, using Alpine Linux as a 5MB base image; their whole product is 22MB.

Next, Smith described the configuration and design necessary to build Operable’s ChatOps platform:

  • Environment variables, not files.
  • Sane defaults are important — you should be able to get the basics working without having to twiddle a lot.
  • Directory mounts for user customization, so data persists across restarts.
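That configuration style can be sketched in a few lines. The variable names below are hypothetical (Cog’s real settings may be named differently); the point is the 12-factor pattern of environment variables layered over sane defaults:

```python
import os

# Hypothetical variable names, for illustration only -- Cog's actual
# settings may differ. The pattern: every knob is an environment
# variable, and every variable has a sane default.
DEFAULTS = {
    "COG_DB_URL": "postgres://postgres@localhost:5432/cog",
    "COG_API_PORT": "4000",
    "COG_DATA_DIR": "/var/lib/cog",  # backed by a directory mount
}

def load_config(environ=os.environ):
    """Merge the process environment over the sane defaults."""
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}

config = load_config({"COG_API_PORT": "8080"})
print(config["COG_API_PORT"])  # explicitly overridden
print(config["COG_DB_URL"])    # falls back to the default
```

Because every setting has a default, the basics work with an empty environment, and a deployment only sets the variables it needs to change.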

“All of this has us being extremely mindful about how we manage application state,” he said.

The development team versions all of its Docker artifacts right along with the code they manage, so there’s an audit trail showing exactly how the code and the configuration have changed. “So when you release something, you have a high amount of faith it’s just going to work,” Smith said.

The container ecosystem gave the development team the ability to do extreme dogfooding. “We do everything in Docker: unit testing, functional testing, integration testing, right inside Docker,” Smith said. “We don’t do anything natively on our computers. This enables us to hit rough edges way before the user does, and we run integration tests via docker-compose, in the way the users are going to interact with it.”
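A compose file for that kind of integration testing might look like the following minimal sketch; the service layout and image names are illustrative, not Operable’s actual configuration:

```yaml
# Hypothetical docker-compose setup: run the product the same way a
# user would, next to the backing database it needs.
version: "3"
services:
  cog:
    image: operable/cog
    environment:
      - COG_DB_URL=postgres://postgres@db:5432/cog
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
```

Running `docker-compose up` then exercises the same images and configuration path that ship to users, which is what lets the team hit rough edges first.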

He said that using Docker solves a lot of design challenges for the API for ChatOps, using the following workflow:

  1. Installation: simply pull the image from a repository.
  2. Upgrades: compare image tags, pull the new image, restart the container.
  3. Configuration: deploy 12-factor environment variables.
  4. Uninstallation: kill the container process and remove the image.
  5. Execution: start a container process — Docker exposes some knobs so you have control over what it can do.
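The upgrade step above can be sketched as pure logic plus the Docker CLI commands it would produce. The image name, tags and container name are made up for illustration:

```python
def upgrade_commands(image, local_tag, remote_tag):
    """Step 2 of the workflow: compare image tags; if they differ,
    pull the newer image and restart the container on it."""
    if local_tag == remote_tag:
        return []  # already up to date, nothing to run
    return [
        f"docker pull {image}:{remote_tag}",  # fetch the new image
        "docker restart cog",                 # hypothetical container name
    ]

# Illustrative tags only.
print(upgrade_commands("operable/cog", "0.4", "0.5"))
```

The whole upgrade is two cheap operations on immutable artifacts, which is exactly why image tags make a good upgrade mechanism.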

How it all works in practice for the Cog development team:

  • They didn’t use the Docker CLI tool “because [it] is underperforming and not a thing we wanted to do in the long term.”
  • Because Docker is open source, they just grabbed the code and wrote their own Docker client that talks to the REST API exposed by the Docker daemon.

“This is how we assure that this command runner is always available — and we don’t have to bother the user with any of the implementation details,” Smith said.
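A minimal sketch of what such a client sends is shown below. The endpoint paths follow Docker’s documented Remote API (assuming API version v1.24, roughly contemporary with the talk); only request construction is shown, and actually sending the requests would go over the daemon’s Unix socket, typically /var/run/docker.sock. Cog’s real client (written in Elixir) of course differs:

```python
import json

# Assumed API version; older and newer daemons use other version prefixes.
API_VERSION = "v1.24"

def create_container_request(image, cmd):
    """Build a POST /containers/create request: a JSON body naming
    the image to run and the command to execute inside it."""
    path = f"/{API_VERSION}/containers/create"
    body = json.dumps({"Image": image, "Cmd": cmd})
    return ("POST", path, body)

def start_container_request(container_id):
    """Build a POST /containers/{id}/start request (empty body)."""
    return ("POST", f"/{API_VERSION}/containers/{container_id}/start", "")

# Illustrative image and command.
method, path, body = create_container_request("operable/cog", ["cog", "--help"])
print(method, path, body)
```

Talking to the REST API directly is what lets Cog hide these implementation details from the user while keeping the command runner available.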

Following this process, he said, Cog’s performance comes in at:

  • 200 ms for container creation.
  • Practically no process execution overhead.
  • 90 ms for container destruction.

In the end, Smith described the ChatOps platform they’ve built as scalable for creating and destroying containers, with I/O overhead remaining flat. He then summarized the pros and cons of this process.

He did note several downsides of building a ChatOps platform on Docker images: Docker still has poor container startup scalability, so cache whenever possible, Smith advised. And Docker still needs to further document the container I/O API. “Like most open source, the documentation is strongly lacking,” Smith said.
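One common form of the caching Smith advises is a pool of pre-created containers, so command execution skips the roughly 200 ms creation cost. This is a hypothetical sketch, not Cog’s implementation; `create()` stands in for a call to the Docker API:

```python
from collections import deque

class ContainerPool:
    """Keep a small pool of warm containers to amortize startup cost."""

    def __init__(self, create, size=4):
        self._create = create
        # Pre-create `size` containers up front, paying the startup
        # cost once instead of on every command execution.
        self._pool = deque(create() for _ in range(size))

    def acquire(self):
        """Reuse a warm container if one exists, else create fresh."""
        return self._pool.popleft() if self._pool else self._create()

    def release(self, container):
        """Return a container to the pool for the next command."""
        self._pool.append(container)

# Illustrative stand-in for container creation via the Docker API.
ids = iter(range(100))
pool = ContainerPool(lambda: f"container-{next(ids)}", size=2)
c = pool.acquire()  # served from the warm pool, no creation cost
pool.release(c)
```

Trading a little idle memory for warm containers sidesteps the startup-scalability problem for the common case.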

The upsides of building a ChatOps platform on Docker images:

  • Little documentation needs to be created for user education, since many users already know Docker.
  • You have a defined image format with good tooling.
  • You have secure process execution.
  • Docker repositories can act as a discovery mechanism.

In the end, it’s not perfect, but it was by far the best fit for Cog’s needs.

Feature image from Operable.

TNS owner Insight Partners is an investor in: Docker, Simply, Kubernetes.