Varnish is a robust HTTP cache used by many high-profile, high-traffic websites. Acquia uses Varnish to help end users retrieve websites faster and to keep the load down on your servers. Once a page is in the cache, performance is fast, but what if you need to make a quick content adjustment? Now that the cached content is out of date and needs to be cleared, how do you keep your cache fresh?

What we did for our Drupal site was set up the Cache Expiration and Purge modules to connect to our Varnish nodes and keep all our pages up to date. Modules like Varnish and Purge integrate some page clearing on their own, but they don't handle clearing Views or Panels caches. Cache Expiration offers more configurable options to act on and lets us use the Workflow module to update our Views. With that extra control we were comfortable extending our caching time to 7 days, beyond what Drupal lets you select from its pull-down. A quick config line in settings.php sets the cache time:

$conf['page_cache_maximum_age'] = 86400 * 7; // 86400 seconds per day * 7 days = one week
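To confirm the new lifetime is actually being sent, you can check the response headers from the command line. This is just a quick sanity check; example.com stands in for your own site's URL.

curl -sI http://example.com/ | grep -i cache-control
# For anonymous page views you should see something like: Cache-Control: public, max-age=604800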

Now that our pages set the cache header to 7 days, we need to set up one of the many purge options that Cache Expiration offers. After installing Cache Expiration, go to admin/config/system/expire and select External Expiration, since we want to connect to our external cache server. If you are going to use Acquia Purge or Varnish, uncheck “Include Base URL in Expires”. The only sections I really needed to worry about were Node Expiration and Menu Links Expiration, since we don't use Comments and our User pages are not their Drupal user account pages. For files, we use Apache to send headers telling the cache server how long it can hold on to each file type, and file uploads append _[0-9] to the file name. For Node Expiration we set all three actions, Insert, Update, and Delete, to trigger a page cache purge. Depending on your setup you might be fine with the basic Front Page and Node Page being purged, but I wanted to do something a little different and chose Custom with these two URLs:

http://[site:url-brief]/node/[node:nid]
http://[node:url:brief]

The next setting we configured was the Menu Links Expiration section. Again, all actions are checked. The right Menu Depth depends on your menu structure and which menus link to content that is updated frequently; for us, Main Menu with a depth of 1 was all I needed.

The module maintainer's definition of Menu Depth:

The goal is to easily arrange for a high visibility menu to be consistent and current on all the pages the menu links to. So, if you are using a menu block with a depth of 2, you can configure this plugin to clear the URLs linked to by said menu block.

Now that all our nodes, as well as our menus, are set up to purge when content changes, it's time to move on to our Views. We set up our rules to purge Views as one rule per View, so for Feature Story we have one rule that purges the home page and our All Stories page.

Create a new Rule and use two events as the trigger: After saving new content and After updating existing content. We filtered both events by a specific content type so the rule only fires for that specific piece of content; for this example I will use Feature Story as the content type. We didn't need any conditions, so that was left as None. Finally, down to the part that matters: the Actions. Add a new action, Clear URL(s) from the page cache. As soon as you select this action, the page updates and presents a text box where you enter the URL(s) you wish to purge. To continue with the Feature Story example, our site only had one URL that needed to be purged:

http://[site:url-brief]/feature-story

Save, and that's it. Any time someone updates a Feature Story on your site, the node page and the View's page are wiped from the cache server, and the next request to those pages caches the newest content.

For the home page, I have a special rule that purges it from the cache any time anything on the site is updated; you may want to modify this to suit your needs.


 

Modules:

Cache Expiration: https://www.drupal.org/project/expire

Purge: https://www.drupal.org/project/purge

Workflow: https://www.drupal.org/project/workflow

Varnish: https://www.drupal.org/project/varnish

Acquia Purge: https://www.drupal.org/project/acquia_purge

This article summarizes the current security solutions for Docker containers. The solutions in this blog post have been discussed and designed by the Docker community. You can also find valuable tips on how to enhance security while running Docker in a production environment.

Possible Security Issues in a Container-Based Environment

Before we jump into the security solutions, let's explore some security issues of container-based systems. Generally speaking, there are three types of attacks, enabled by the vulnerabilities of container-based systems.

Types of Attacks:

  • Container compromise: results in illegitimate data access and affects the control flow of instructions
  • DoS (Denial of Service): disturbs normal operation of the host or other containers
  • Privilege escalation: obtains a privilege that was not originally granted to the container

Disclosed Vulnerabilities:

  • Namespacing Issues – Docker containers use kernel namespaces to provide a certain level of isolation. However, not all resources are namespaced:
    • UID: user IDs are shared with the host, causing the “root” user vulnerability
    • Kernel keyring: containers running as a user with the same UID have access to the same keys if those keys are handled by the kernel keyring
    • Kernel & its modules: loaded modules become available across all containers and the host
    • Devices: includes disk drives, sound cards, GPUs, etc.
    • System time: time is not namespaced; the SYS_TIME capability is disabled by default, but if it is enabled, a container can change the clock for the host and every other container
  • Kernel Exploits – Container-based applications share the host kernel, so flaws in the host kernel might allow malicious containers to escape and gain access over the whole system.
  • DoS Attacks – Since all containers share kernel resources, if a container or user consumes too much of a certain resource, it can starve out other containers on the host.
  • Container Breakout – Because users are not namespaced, any process that breaks out of the container will have the same privileges on the host as it did in the container; for example, if you were root in the container, you will be root on the host. This is a typical privilege escalation attack, unlikely to happen, but possible.
  • Poisoned Images – It is possible for attackers to modify or embed malicious programs in an image and trick users into downloading the corrupted image.
  • Compromising secrets – Applications need credentials to access databases or backend services. An attacker who gets access to these credentials has the same access as the application. This problem becomes more acute in a microservice architecture in which containers are constantly stopping and starting.

Current Solutions:

Now let's take a look at the security solutions that come with the current Docker implementation and the strategies and techniques that can be used in production.

Least Privileges

One of the most important principles for container security is Least Privileges: each process and container should run with the minimum set of access rights and resources it needs to perform its function. This includes several actions to reduce the capabilities of containers:

–  Do not run processes in a container as root, to avoid giving attackers root access.

–  Run filesystems as read-only so that attackers cannot overwrite data or save malicious scripts to disk.

–  Cut down the kernel calls that a container can make, to reduce the potential attack surface.

–  Limit the resources that a container can use.

This Least Privileges approach reduces the possibility that an attacker can access or exploit data or resources via a compromised container.
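As a concrete illustration of the points above, here is a sketch of a docker run invocation that applies them (resource limits are covered under Control Group below). The container name, UID and image are arbitrary placeholders, and a real service will usually need a few more writable paths or capabilities than this:

# Run as an unprivileged UID, with a read-only root filesystem, a small writable
# /tmp, and all Linux capabilities dropped.
docker run -d --name demo \
  --user 1000:1000 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  alpine sleep 3600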

Internal Security Solutions

Containers leverage Linux namespaces and control groups (cgroups) to provide a certain level of isolation and resource limitation.

Namespace

Docker provides process, filesystem, device, IPC and network isolation by using the corresponding namespaces.

  • Process Isolation: Docker uses the PID namespace to separate container processes from the host and from other containers, so that processes in a container cannot observe or interfere with processes running on the host or in other containers (a quick demo follows this list).
  • Filesystem Isolation: The mount namespace ensures that mounts made inside a container are only visible within that container.
  • Device Isolation: A container cannot access any devices unless it is privileged.
  • IPC Isolation: The IPC namespace prevents processes in a container from interfering with those in other containers.
  • Network Isolation: The network namespace gives each container its own IP address, IP routing tables, network devices, etc.
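A quick, throwaway way to see process isolation in practice: listing processes inside a fresh container shows only that container's own processes, not the host's.

docker run --rm alpine ps aux
# The output lists only the container's own process(es), typically just the "ps"
# command itself running as PID 1; none of the host's processes are visible.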

Control Group

Docker employs cgroups to control the amount of resources, such as CPU, memory, and disk I/O, that a container can use. Under this control, each container gets a fair share of the resources and is prevented from consuming all of the available resources.
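Resource ceilings can be set per container at run time; a brief sketch, where the numbers are arbitrary placeholders to be tuned for the actual workload:

# Cap memory at 256 MB and give the container a reduced share of CPU time
docker run -d --memory 256m --cpu-shares 512 alpine sleep 3600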

Linux Kernel Security Systems

Kernel security systems exist to harden the security of a Linux host; we can also use them to secure the host from containers.

By default, Docker drops a large set of Linux capabilities from its containers in order to prevent an attacker from damaging the host system when a container is compromised, and it allows you to configure which capabilities a container can use.
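A small demonstration of the effect: ping needs the NET_RAW capability, which is in Docker's default set, so dropping it makes the same command fail.

docker run --rm alpine ping -c 1 8.8.8.8                      # works: NET_RAW is granted by default
docker run --rm --cap-drop NET_RAW alpine ping -c 1 8.8.8.8   # fails: the capability was dropped
# In the same spirit, --cap-drop ALL plus selective --cap-add flags gives a
# container only the capabilities it actually needs.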

Linux Security Module (LSM)

The two most popular LSMs are AppArmor and SELinux:

  • SELinux is a labeling system that implements Mandatory Access Control using labels. Every object, such as a process, file/directory, network port, or device, has a label, and rules are put in place to control access to objects.
  • AppArmor is a security enhancement model for Linux, based on Mandatory Access Control like SELinux. It permits the administrator to load a security profile for each program, which limits the capabilities of the program.
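Docker exposes both through the --security-opt flag. A hedged sketch: the AppArmor profile name below is hypothetical and must already be loaded on the host, and the SELinux label is just an example MCS level.

# AppArmor: run the container under a specific, pre-loaded profile
docker run -d --security-opt apparmor=my-nginx-profile nginx
# SELinux: assign an MCS label so the container is confined by SELinux policy
docker run -d --security-opt label=level:s0:c100,c200 nginx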

Another Approach

Seccomp

The Linux seccomp (secure computing mode) facility can be used to restrict the system calls that a process can make; in other words, containers can be locked down to a specified set of system calls.
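Docker lets you supply a seccomp profile at run time via --security-opt; a sketch, where the path to the JSON profile is a placeholder:

# Apply a custom seccomp profile that allows only the system calls the application needs
docker run -d --security-opt seccomp=/path/to/profile.json alpine sleep 3600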

In Production

When running Docker in a production environment, you will want to leverage one of the security solutions listed above and apply proper precautions to provide a more secure and robust system. There are three major security tips to keep in mind when running Docker in production.

Segregate Containers by Host

The main reason to place each user on a separate Docker host is to minimize the loss when a container breakout happens. If multiple users share one host and one user monopolizes all the memory on the host, the other users are starved out. Even worse, if a container breakout happens, a user could gain access to other users' containers or data through the compromised container.

Therefore, although this approach is less efficient than sharing hosts between users and will result in a higher number of VMs and/or machines than reusing hosts, it’s important for security.

A similar measure is to separate containers holding sensitive information from less-sensitive ones, for the same reason.

Applying Updates

Just as is recommended for Windows systems, apply updates regularly. This includes updating base images and dependent images to fix vulnerabilities in common utilities and frameworks. At times, we need to update the Docker daemon to gain access to new features, security patches or bug fixes. Removing unsupported drivers is also important, because they can be a security risk: they won't receive the same attention and updates as other parts of Docker.
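In practice this can be as simple as refreshing the base image before rebuilding; a sketch, where myapp is a hypothetical image name:

# Pull the latest patched base image and force the build to use it
docker pull debian:jessie
docker build --pull -t myapp:latest .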

Image Provenance

To safely use images, you need to have guarantees about their provenance:

  • where they came from
  • who created them
  • ensure you are getting exactly the image you want

There are three solutions for image provenance: secure hashes, secure signing and verification infrastructure, and proper use of Dockerfiles.

  • Secure Hash: A secure hash is like a fingerprint for data: a small string that is unique to the given data. If you have a secure hash for some data and the data itself, you can recalculate the hash and compare. In Docker this is called the digest, a SHA-256 hash of a filesystem layer or manifest (a metadata file describing the parts of an image, containing a list of constituent layers identified by digest).
  • Secure Signing and Verification Infrastructure: Data can be changed or copied as it travels over insecure channels (e.g. HTTP), so we need to ensure we publish and access content using secure protocols. The Notary project is an ongoing secure signing and verification infrastructure project for Docker, which compares a checksum for a downloaded file with the checksum in Notary's trusted collection for the file's source (e.g. docker.com). For more details, see https://github.com/docker/notary
  • Dockerfile: Contrary to what one might expect, a Dockerfile is likely to produce different images over time, so as time passes it becomes hard to be sure what is in your images. To use Dockerfiles properly (see the sketch after this list), you should:
    • Always specify a tag in the FROM instruction, and use a digest to pull exactly the same image each time
    • Provide version numbers when installing software from package managers. However, since package dependencies can change over time, sometimes we need tools (e.g. aptly) to take a snapshot of the repository
    • Verify any software or data downloaded from the internet by using checksums or cryptographic signatures
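A minimal Dockerfile sketch that follows those three rules. The package version, download URL and checksum are placeholders for illustration, and pinning FROM by digest (debian@sha256:...) is even stricter than pinning by tag:

# Pin the base image to an explicit tag (or, stricter, to a digest)
FROM debian:jessie

# Pin package versions so rebuilds do not silently pull newer packages
# (the version string is a placeholder; use the real one from your repository)
RUN apt-get update && apt-get install -y curl=7.38.0-4+deb8u5 \
 && rm -rf /var/lib/apt/lists/*

# Verify anything downloaded from the internet against a known checksum
# (the URL and checksum are placeholders)
RUN curl -fsSL -o /tmp/tool.tar.gz https://example.com/tool.tar.gz \
 && echo "<expected sha256>  /tmp/tool.tar.gz" | sha256sum -c -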

 

This blog post is a glance at the current security solutions for Docker containers; if you are interested, please refer to the reference articles for more details. Are you using Docker in production? Have you implemented some of these security models?

References

[1] Analysis of Docker Security

[2] Docker Security – Using Containers Safely in Production

[3] Docker Doc – Docker Security

Web Application Programming Interfaces (APIs) allow us to share data between systems while preventing the leaking of low-level details that would otherwise cause tight coupling between systems. These APIs are just like any application, with the small difference that they don't have an end-user GUI. Instead, APIs focus on gathering data from backend(s) and performing operations on this data while providing a standard and consistent interface to these operations. APIs have the same need as regular applications when it comes to iterative planning/design and user feedback. This is where Swagger (the OpenAPI Specification) comes in.

The Swagger design language work started in 2010 as a framework to document and describe Web APIs. On January 1, 2016, it was renamed the OpenAPI Specification. The rename was part of converting the Swagger project into one of the Linux Foundation's Collaborative Projects, which brings more involvement from vendors and the community in the direction of the toolset and the design language.

The OpenAPI Specification allows us to use JSON or YAML to describe our web API endpoints (URLs), their parameters, response bodies and error codes. Before the OpenAPI Specification existed, people would use text files, Word documents or other non-web-API-friendly formats to document their APIs.
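As a taste of what that looks like, here is a minimal, hypothetical Swagger 2.0 snippet in YAML; the endpoint and fields are made up for illustration and are not one of OSU's actual APIs:

swagger: "2.0"
info:
  title: Example Directory API
  version: "1.0.0"
basePath: /v1
paths:
  /people/{id}:
    get:
      summary: Look up a single person record by ID
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested person record
        "404":
          description: No record exists for the given ID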

When OSU began our API development efforts, we wanted to have a communication and feedback cycle with OSU developers. Using the OpenAPI Specification (Swagger), we can use a tool such as the Swagger editor (http://editor.swagger.io/#/) and make changes to the documentation of an API in real time while we talk to developers on campus. This allows us to change the visible documentation of an API without having to implement it or spend a lot of time developing a separate structure or document. We can make a change directly to the YAML file, which is faster than having to adjust already-implemented APIs.

Information Services researched a variety of tools to describe APIs. We looked at the OpenAPI Specification, RAML (http://raml.org/), API Blueprint (https://apiblueprint.org/) and I/O Docs (https://github.com/mashery/iodocs). At first, from a technical perspective, RAML was the most attractive design language when we compared it to the OpenAPI Specification, but version 2 of the OpenAPI Specification addressed the v1 downsides. The OpenAPI Specification also had the largest user base, with a huge online community of developers and, along with that, vendor support and open source tools and frameworks.

The benefits of OpenAPI Specification are:

  • Online editor – provides a WYSIWYG editor for the API, making it easy to make changes and see the output.
  • Mock server – you can describe your API and have a mock/test server endpoint that returns test data. This is helpful when testing APIs.
  • Client code – sample code that can be used to test APIs and use the APIs in a variety of languages.
  • Vendor/OSS support – a variety of open source tools, frameworks and vendor offerings that work with the OpenAPI Specification have made it the de facto language for documenting APIs.

Our API development cycle is:

  1. Talk to stakeholders and data owners.
  2. Design API (using OpenAPI Specification).
  3. Collect Feedback.
  4. Implement.
  5. Release as Beta & collect feedback.
  6. Release to Production.
  7. Go back to first step.

These steps are similar to the application development cycle. The key component of our API development is listening to our community of developers. The APIs are built for developers, and using the OpenAPI Specification to design the API with developers in mind from the start allows us to collect feedback right away and early on. Before we start implementation of an API, we have a really good idea of what the developers need, and the design has been validated by API consumers (OSU developers), stakeholders and data owners.

Our API source code is hosted on GitHub, and the OpenAPI Specification files are treated just like code: they are versioned and included along with the source. The API gateway that we use (apigee.com) lets us upload our OpenAPI Specification YAML file and creates the documentation pages needed for our APIs. This process streamlines our documentation while also preventing us from locking ourselves into a single vendor. If, down the road, we need to move to another API gateway solution, we will be able to reuse our OpenAPI Specification YAML files to create and document our APIs.

The OpenAPI Specification has been quick for our team to learn; our students are able to pick it up after a few hours. Starting from a sample file, it is easy to modify it to document new APIs. Once a person has experience with the OpenAPI Specification, we can have a design document to share with developers for feedback in less than 30 minutes. This enables us to develop APIs faster and keeps our developers happy. Faster development and happy developers? That's a win.
