APIs available on our developer portal use Apigee as an API gateway. Apigee proxies our API traffic to handle functionality like authorization, authentication, caching, and load balancing, among other things. Our current license with Apigee allows us 8 million API calls per month. This quota has given us a comfortable amount of headroom since we started using Apigee in 2015, but since the start of this academic year, we’ve been getting too close to our 8 million limit (which is a good problem to have!). We don’t expect API traffic to slow down, so we are changing our plan with Apigee to increase our monthly quota. As part of this upgrade, Apigee is also moving our gateway to better infrastructure hosted on Amazon Web Services. Our new agreement with Apigee increases our monthly quota to 300 million calls per month and gives us a higher runtime SLA.

One downside to this upgrade is that it requires an outage. Apigee has performed these types of upgrades for its other customers before, so they estimate the outage will last 10-45 minutes. We have scheduled this outage for Thursday, November 2nd, at 7:00 AM (please see the update below for the new upgrade date). This will be a total outage of all our APIs on api.oregonstate.edu, including the development and test environments.

We apologize for any inconvenience caused by this outage, and thank you for bearing with us as we aim to improve our APIs and adapt to increasing traffic.


As of 10:00 AM November 2nd, all APIs are back online. The upgrade was unsuccessful and Apigee had to roll back the changes they were performing. The upgrade will be performed at a future date and time. Apigee identified the problem that prevented the upgrade and will be addressing it before the next attempt. We apologize that the outage lasted longer than expected, and we will be working with Apigee to make sure the future upgrade causes less interruption to API traffic.

Apigee is going to attempt the upgrade again on Thursday, November 9th, at 6:00 PM. The problem with the previous attempt was found to be related to access tokens: copying them was slower than expected, so Apigee decided to abort the upgrade. For the upgrade scheduled on November 9th, Apigee will not copy access tokens. Since most access tokens expire in an hour or less, skipping them reduces both the outage time and the risk associated with this upgrade.

Final Update

Apigee successfully completed the upgrade and all API traffic was returned to service at 6:20PM, November 9th. Thanks for bearing with us!

One of our most popular APIs is the locations API. The locations API is used to get campus buildings, extension campus locations, and dining locations on campus. Since the word “location” can describe many types of places, we actively source new locations to add to the API and discover new data to add to existing locations. While sourcing new locations and data, we work with data stewards to ensure the data we provide is accurate. One example of enhancing existing locations in the API is the recent addition of building geometries to Corvallis campus buildings.


Initially, campus buildings in the locations API included a coordinate pair representing the centroid of the building. This can be useful as an alternative to the building’s address for placing a point on a map. Better yet, the coordinates can be queried against by specifying lat and lon query parameters in the URL of a locations API request. Using these parameters returns buildings that are close to the coordinates provided in the URL; use the distance and distanceUnit query parameters for a more specific query.

Here’s an example of a locations API request that returns all locations that are within 300 yards of the Valley Library:
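As a sketch, a request of that shape might be built like this. The coordinates below are approximate placeholder values for the Valley Library, and the distance unit value is an assumption; check the API documentation for exact parameter values:

```python
from urllib.parse import urlencode

# Illustrative sketch: the coordinates and "yd" unit are placeholder
# assumptions, not values taken from the API documentation.
base = "https://api.oregonstate.edu/v1/locations"
params = {
    "lat": 44.5652,        # approximate latitude of the Valley Library
    "lon": -123.2762,      # approximate longitude of the Valley Library
    "distance": 300,
    "distanceUnit": "yd",
}
url = base + "?" + urlencode(params)
print(url)
```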



Centroid coordinates are useful for distance-related queries, but what if you want to draw the shape of a building on a map? A new dataset we recently added to buildings is geometry coordinates. Geometry coordinates can be used with services like the Google Maps API to draw building shapes on a map. A good open source alternative to the Google Maps API is Leaflet, which can also map coordinates from the locations API.

Buildings in the locations API now have a geometry object that follows the GeoJSON specification for a geometry object. Within the geometry object are type and coordinates. Type will be either Polygon or MultiPolygon, depending on the location: locations with multiple physical structures (like Magruder Hall) are MultiPolygon, while locations with a single structure are Polygon. Most buildings on campus are Polygon locations.

Let’s take a closer look at a simple polygon location, Hovland Hall:

"geometry" : {
  "type" : "Polygon",
  "coordinates" : [ [ [ -123.281543, 44.566486 ], [ -123.281544, 44.56636 ], [ -123.281041, 44.566359 ], [ -123.281041, 44.566485 ], [ -123.281543, 44.566486 ] ] ]
}

Coordinates for a Polygon location will be a three-dimensional array of coordinate pairs, where index [0] of the third level of the array is longitude and index [1] is latitude. The second level of the array is an array of coordinate pairs, otherwise known as a ring. The first level of the array is an array of rings. Each ring is a set of coordinate pairs that, if connected in order, would draw the shape of the building. As a rule of GeoJSON, the first and last coordinate pairs in a ring must be identical. The example of Hovland Hall shows five coordinate pairs (with the first and last being identical), which make up one ring within one polygon.
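The nesting described above can be walked in a few lines. This sketch uses the Hovland Hall geometry from the example:

```python
geometry = {
    "type": "Polygon",
    "coordinates": [[
        [-123.281543, 44.566486], [-123.281544, 44.56636],
        [-123.281041, 44.566359], [-123.281041, 44.566485],
        [-123.281543, 44.566486],
    ]],
}

exterior = geometry["coordinates"][0]  # first ring is the exterior
assert exterior[0] == exterior[-1]     # GeoJSON rings must close on themselves

for lon, lat in exterior:              # index 0 is longitude, index 1 is latitude
    print(lat, lon)
```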

Some buildings on campus have multiple rings (multiple arrays of coordinate pairs). A polygon with multiple rings represents a building with holes in it, like Cordley Hall. In an array of rings, the first ring represents the exterior structure of the building, while any additional rings are holes (interior rings). Moreover, GeoJSON specifies the wrap direction of exterior and interior rings. Wrap direction is the direction a ring is drawn when laying out each coordinate pair on a map in order: exterior rings are wrapped counterclockwise, while interior rings are wrapped clockwise. However, it’s worth noting that services like the Google Maps Polygon API only care that exterior and interior rings have opposite wrap directions.
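Wrap direction can be checked with the shoelace formula, which gives a positive signed area for a counterclockwise ring and a negative one for a clockwise ring. A small sketch:

```python
def signed_area(ring):
    """Shoelace formula over a closed ring of [lon, lat] pairs.

    Positive result -> counterclockwise (a GeoJSON exterior ring),
    negative result -> clockwise (an interior ring / hole).
    """
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

# A counterclockwise unit square (closed ring) has signed area +1.0;
# reversing it yields -1.0.
square = [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]
print(signed_area(square))                   # 1.0
print(signed_area(list(reversed(square))))   # -1.0
```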

Buildings with holes in them are like donuts, where the interior ring represents the hole in the middle. Buildings can have multiple interior rings, which represent multiple holes.

Since MultiPolygon locations are locations with multiple structures, their coordinates array adds another dimension to represent an array of polygons. All the same rules apply, except that the coordinates array for a MultiPolygon is four-dimensional.
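One way to smooth over that extra dimension is to normalize both types into a list of polygons; a minimal sketch (the example geometries are made up, not taken from the API):

```python
def polygons_of(geometry):
    """Return a list of polygons (each a list of rings), whether the
    geometry is a Polygon (3-D coordinates) or MultiPolygon (4-D)."""
    if geometry["type"] == "Polygon":
        return [geometry["coordinates"]]   # wrap the single polygon in a list
    if geometry["type"] == "MultiPolygon":
        return geometry["coordinates"]     # already a list of polygons
    raise ValueError("unsupported geometry type: " + geometry["type"])

ring = [[0, 0], [1, 0], [1, 1], [0, 0]]
poly = {"type": "Polygon", "coordinates": [ring]}
multi = {"type": "MultiPolygon", "coordinates": [[ring], [ring]]}
print(len(polygons_of(poly)), len(polygons_of(multi)))  # 1 2
```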

Do you have any ideas for data to add to the locations API? Contact us to share your ideas or visit our developer portal to register an application to try using the locations API: developer.oregonstate.edu

This year our team participated in the second annual Hackathon hosted by the Information Services department. Teams were given around seven hours to create something before presenting their creations to all the participants and being judged on their work. Awards were given out at the end for categories like simplification, partnership, and learner experience.

Our team set out to create some custom skills for Amazon Alexa, Amazon’s virtual assistant voice service. We wanted Alexa to be able to answer questions about OSU, and we decided to use the APIs we’ve built as the data source for some of the answers. As part of our project, we also had to create a new API to function as an intermediary between the Alexa voice service and the APIs providing the data. Amazon allows developers to use either an AWS Lambda function or an HTTPS endpoint to facilitate the interaction between the Alexa service and a backend data source.

Since we opted for the HTTPS option, we had to build our API around the specific JSON schema that Alexa sends and expects to receive. Amazon provides the Alexa Skills Kit to allow developers to create a skill that has a number of intents. A skill always has an invocation name that lets Alexa know which skill a person wants to use. We chose “Benny” as the invocation name for our skill since the questions Alexa would answer are all related to OSU. Intents are the types of actions that can be performed within a skill. To trigger an intent we created, we would start by saying “Alexa, ask Benny…”. When an intent is triggered, Alexa sends a request to the API we created during the hackathon. Depending on the intent, our API calls one of our backend APIs to get the data for a response. The API uses the data to create a text response that’s meant to be spoken and returns it to Alexa.
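Our hackathon code isn’t reproduced here, but the response envelope an HTTPS skill endpoint returns follows the Alexa Skills Kit JSON schema. A minimal sketch of building that envelope (the example text is made up):

```python
import json

def alexa_response(text):
    """Build a minimal Alexa Skills Kit JSON response that speaks `text`.

    Sketch of the response schema only; the real service also supports
    SSML output, cards, reprompts, and session attributes.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

print(json.dumps(alexa_response("The library is open until midnight."), indent=2))
```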

We used the locations API for several of the intents we created. The data in the locations API allowed us to create intents to answer questions like “what restaurants are open right now?”, “is the library open today?”, and “what restaurants are close to me?”.

We used the directory API to create an intent to look up information about people on campus. We can ask things like “what is the email address for Edward Ray?” and “what is the phone number for Wayne Tinkle?”.

Our team also created intents that used our terms API and class search API. For example, to get a list of open terms, you’d say “Alexa, ask Benny what terms can I register for?”. We also created the PAC (physical activity course) intent. When I was a student, I would often find myself looking for a random 1-2 credit class to take that fit around the rest of my schedule. The PAC classes were nice because I could do fun things like biking, running, or rock climbing. The PAC intent allows you to ask “give me a PAC class for Fall 2017 at 2:00 PM on Mondays”. Alexa will then find a random PAC class that fits into that schedule.

After the hackathon, we created a video to demo some of the intents we created with an Amazon Echo. However, you don’t need an Amazon Echo to develop and test Alexa skills. There are many applications out there that allow you to test an Alexa skill, like EchoSim.

Video Demo: https://media.oregonstate.edu/media/t/0_vqlnak06

Amazon lets anyone beta test a skill they create by linking an Alexa-enabled device (like the Echo or EchoSim) to their account. Releasing a skill so it’s available to any Alexa device requires approval from Amazon. Since the skill we created at the hackathon was a proof of concept, we didn’t submit it for approval, so it isn’t available to be used publicly.

Centralizing Access Token Requests

The current method to get an access token for one of our APIs is to make a POST request containing a client ID and client secret to the API, appending “/token” to the end of the URL. For example, the first URL below makes an access token request, and the second URL makes an API request to the locations API:
  • POST https://api.oregonstate.edu/v1/locations/token
  • GET https://api.oregonstate.edu/v1/locations
Today, we are announcing the OAuth2 API, which performs OAuth2 related requests and serves as a centralized OAuth2 API. Developers can use the OAuth2 API to request an access token.
  • POST https://api.oregonstate.edu/oauth2/token
The token endpoint for the OAuth2 API allows access token requests for any API. Developers can then use the same access token in the Authorization header of their API request like normal.
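As a sketch, a token request to the centralized endpoint might be built like this. The grant_type value and Bearer header convention are assumptions based on the OAuth2 standard; the developer portal documentation is authoritative:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Sketch of a client-credentials token request to the centralized endpoint.
# CLIENT_ID / CLIENT_SECRET are placeholders from a developer-portal app.
body = urlencode({
    "client_id": "CLIENT_ID",
    "client_secret": "CLIENT_SECRET",
    "grant_type": "client_credentials",   # assumed grant type
}).encode()

req = Request("https://api.oregonstate.edu/oauth2/token",
              data=body, method="POST")

# The access_token in the JSON response then goes in the API request header:
#   Authorization: Bearer <access_token>
print(req.full_url, req.get_method())
```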


Today, we are also deprecating the decentralized “/token” endpoints for our APIs. We plan to remove token endpoints from our APIs in production on Monday, November 13th, 2017, and we encourage you to start using the OAuth2 API instead for access token requests. Before the production change, we’ll be removing the decentralized token endpoints from our development environment on October 30th, 2017.
After Monday, November 13th, 2017, you won’t be able to get an access token by adding “/token” to the end of a request URL. For example, these requests won’t work after that day:
  • POST https://api.oregonstate.edu/v1/directory/token
  • POST https://api.oregonstate.edu/v1/locations/token
Instead, please use the OAuth2 API to get an access token. Link to documentation. 


Oregon State University uses OAuth 2 for API authentication and authorization. When someone registers an application on our developer portal, they get a client ID and client secret which are used during the API request process. To access an API resource, the client ID and secret are used in a token request to the OAuth2 API: POST https://api.oregonstate.edu/oauth2/token

The response for a token request will include an access token, which is used to get access to an API and has a limited lifetime. The response will also include a token expiration time and a list of APIs the access token may be used with. A developer can then use the access token in the header of a request to access an API the token is authorized for. This process works well for public data (like the locations or directory APIs) or when only specific people/departments can use an API.

Three-legged OAuth

Deprecating the decentralized token endpoints lets us direct all access token requests to one API instead of each individual one. This makes things simpler, but it also lets us expand our use of OAuth2 beyond access token requests. One component of OAuth is the three-legged flow, which allows an end user to grant an application permission to access certain data about the user. For example, think about how applications on the web share data with each other: say a developer creates a web form that lets a user auto-fill information from their Facebook profile. The web form directs the user to Facebook to authorize the web form application to access the user’s data. This is an example of three-legged OAuth.

Enabling three-legged OAuth lets us expand our scope when developing APIs that deal with more confidential or sensitive data, and it lets users decide whether an application should access data about them. As an example, think about an API that retrieves a student’s grades. Neither the developer nor the student (the user in this example) should have access to everyone’s grades; a student should only be able to access their own. The student would log in (authenticate) before deciding whether the application is allowed to retrieve their grades.
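As a purely illustrative sketch of the first leg of that flow, the application redirects the user to an authorization URL. The endpoint path, scope name, and parameter values below are assumptions based on the OAuth2 standard, not confirmed endpoints of our gateway:

```python
from urllib.parse import urlencode

# Hypothetical first leg of an authorization-code (three-legged) flow.
params = {
    "response_type": "code",                        # standard OAuth2 code grant
    "client_id": "CLIENT_ID",                       # placeholder
    "redirect_uri": "https://example.app/callback", # placeholder
    "scope": "grades.read",                         # illustrative scope name
    "state": "random-csrf-token",                   # guards against CSRF
}
authorize_url = "https://api.oregonstate.edu/oauth2/authorize?" + urlencode(params)
print(authorize_url)
```

After the user logs in and approves, the gateway would redirect back to redirect_uri with a short-lived code, which the application exchanges for an access token at the token endpoint.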

For more information on the OAuth standard, go to https://oauth.net/2/

Register an application on the developer portal to get started using some of OSU’s APIs: https://developer.oregonstate.edu

I’ve been working in IT at OSU as a student for the past three years, but over the course of this year I’ve been taking on more of the responsibilities of a developer. Growing into the role of student developer has been well timed with my degree in Business Information Systems: my undergraduate studies this past academic year have involved more software architecture and development leading up to my graduation. The work I’ve done for my job and for my degree has been complementary, allowing me to share skills and techniques between the two disciplines.
Taking classes in a business environment has given me a different perspective on software development for my work. Information systems business classes, besides teaching programming, focused on making sure the outcome of software development is successful and addresses the needs of stakeholders. We were taught to focus on the problem being solved, present a solution to stakeholders in a non-technical way, and develop measures of success to ensure the outcome isn’t a failure. These skills, along with my experience as a developer, guide some of the advice I have for students who want to be developers:
  • Be able to communicate non-technically when needed, whether with a supervisor, customer, or colleague in a different department. Being in a software development role means taking on work that requires special skills and knowledge unique to you and your team. The ability to propose a better solution to a problem, explain an issue to a stakeholder, or describe the work being done to someone who isn’t as technically proficient is key. I’ve always believed that being able to teach a topic or skill is a marker of proficiency in that area, and when it comes to software development and IT, being able to communicate something non-technically is a similar marker of proficiency.
  • Remember the importance of soft skills like verbal communication, demo/presentation skills, and writing skills. According to a survey conducted by the Technical Councils of North America, 70% of employers say soft skills are equally important as technical skills for success in a software development career. My experience in IT and software development has taught me the importance of these soft skills, and it has always been beneficial to keep them up through practice, whether giving a demo at work or a presentation for a class.
  • Learn and practice technical skills through projects and practical experience. Learning technical skills is very important, but I would advise aspiring developers to practice and maintain their skills through methods that are demonstrable to employers. Being able to show a coding project or talk about projects accomplished during a job or internship might be required during the interview or application process. The knowledge of development skills serves as a foundation, but being able to demonstrate those skills is important for pursuing a career as a developer.
At the end of the day, good technical skills will be at the core of software development. However, getting into software development as a career can be difficult without much prior experience. I believe demonstrating the skills above can show employers that someone is able to grow into a developer position and further diversify their technical and soft skills.

The ERP system that we use is Banner, and there are some big architectural changes that we’ve been adopting. These translate into changes in how we do development, which we’ll try to summarize in this post.

Revision Control

We were using SVN to manage code that our vendor provided. The code is now provided via git repositories! This meant that we needed to store these git repositories locally. Other units within Information Services have used GitLab ( https://about.gitlab.com ) in the past. GitLab is a good open source solution, but our previous experience with it showed that the monthly updates usually broke some aspect of functionality, and upgrades were painful due to Rails/Ruby version updates. We went with GitHub Enterprise ( https://enterprise.github.com/home ) instead.

We talked to a couple of other schools that used GitHub Enterprise (for other types of development) and they were very happy with the UI and maintenance. Our experience with upgrades to GitHub Enterprise has been painless. Features and changes to the GitHub platform first appear on public github.com, and after a couple of months we see them arrive in a GitHub Enterprise upgrade.

The change from SVN to git is not an easy one for our developers. Moving from a centralized system like SVN, where we mostly used TortoiseSVN as a client, to a decentralized system like git is a big change. We tried using Code School to provide devs with access to great git tutorials, but due to work-related commitments and devs not actively using git, the training wasn’t effective. Our next plan of action is to hold group git exercises on a frequent basis. This will expose developers to using git more often and to the SCM features that a decentralized system like git provides. For GUI clients we recommend SourceTree ( https://www.sourcetreeapp.com ); within Linux we use the command line, since that’s easier.

Development Environment

Our developers use Windows for development. When we started to experiment with Banner XE development, we ran into roadblocks: managing different versions of Java or Grails in Windows is not easy. In *nix environments, we have tools such as SDKMAN ( http://sdkman.io ) to manage the versions of JVM tools. It took us days to get Grails applications and their dependencies properly running in Windows. That’s when we realized that Windows could work, but trying to replicate this setup on each developer workstation was going to be a nightmare.

To make lives easier, we used VirtualBox ( https://www.virtualbox.org ) and created a VM running Ubuntu. In this VM, we set up the various Java and Grails dependencies needed to build Grails Banner applications. By having a single VM, we could set up the dependencies and configure the JVM tools once, and all the developers could leverage a single installation. We had a Google doc that specified how to set up and configure the dependencies; it was several pages long, mostly due to screenshots in Windows. The Linux instructions, on the other hand, were much shorter and could be scripted.

Using VMs was great, but we started to run into memory problems. Our development machines were about 4-5 years old with 16GB of RAM and non-SSD hard drives. The Grails applications that we are dealing with easily require 3-4GB of RAM each. We also use IntelliJ IDEA Ultimate within VirtualBox, and found that it worked properly there. To be on the safe side, we give a total of 6GB of RAM to VirtualBox. This meant that developers couldn’t run many applications on their machines: web browsers use a lot of memory, and trying to run two VirtualBox VMs concurrently wasn’t possible.

To support development, developers got new machines. The new workstations have 32GB of RAM and an SSD. A new CPU and GPU were also part of the upgrade, but we didn’t find those to be a constraint on the previous machines. The SSD provided some improvements for Windows, but once VirtualBox is running, the SSD doesn’t make a huge difference. I would still recommend upgrading to an SSD and more RAM if possible: it is a very cheap upgrade, and you will run out of RAM.


In order to support the updated development workflow, we are using Jenkins ( https://jenkins.io/index.html ). The ESM tool provided by our ERP vendor leverages Jenkins as the underlying technology. From talking to our DBA, ESM only provides admin-level access, so it’s not something we can easily allow devs to use yet. We do have ESM installed, and our DBA uses it to fetch code from the vendor.

Dev Practices - Ansible Automation

Since we have non-Grails-based development, it made sense for us to stand up a separate instance of Jenkins. We are leveraging Ansible ( https://www.ansible.com ) to set up Jenkins, its dependencies, and its jobs. By using Ansible, we are able to test changes to Jenkins locally via Vagrant ( https://www.vagrantup.com ) on our workstations. We can go from a bare VM to compiling an XE Grails application (using a Jenkins job) within 5-7 minutes.


Dev Practices - Vendor Code Updates

So far we haven’t made any custom changes to the codebase.

  1. A Jenkins job checks daily whether a new git repository is available from the ERP vendor. If there is a new one, we create a new repo in GitHub Enterprise.
  2. Daily, we update our GitHub Enterprise repos with any new commits, branches, and tags from the vendor.

Dev Practices - Developer Workflow

The development workflow that we plan to use is the following:

  1. Use three git branches that mirror our instances for each application. One environment is production and the other two are development/test.
  2. When a developer wants to make a change to either the development or test environment, they create a feature branch. The feature branch’s name is a ticket ID (from our bug tracker) plus a two- or three-word description of the change. This allows us to trace a change back to a developer and why the change was made.
  3. We want to commit early and often to the feature branches. Once the code is ready, we’ll use GitHub’s Pull Requests to do reviews.
  4. Jenkins jobs can be leveraged to perform checks on the code.
  5. Once the code is ready to be deployed to either the test or dev environment, a Jenkins job will generate a WAR file for our DBA.
  6. Our DBA then uses ant scripts provided by our ERP vendor to deploy the WAR file to Tomcat.

The advantage of using Jenkins to compile the code is that it ensures builds don’t depend on local code changes or dependencies on a developer’s workstation that aren’t present in GitHub Enterprise. All WAR files are compiled in the same manner (JVM version, dependencies, etc.), naming of artifacts is consistent, and Jenkins can send notifications when builds fail. Developers can review previous builds, generate new WAR files, and examine failed builds.

Next Steps

Some of our next steps and improvement areas:

  • Right now we are using Google docs to document steps for developers. These need to be improved, since at the time they were written we were heavily using Windows for development.
  • If you are coming from SVN, moving to git is a big change. We plan to do git exercises as a team on a weekly basis. This will help our developers get familiar with git commands, merging, branches, and dealing with git conflicts.
  • As we start to make changes to the vendor-supplied code, we need to be able to tell what version of a Grails application we are running. We need to keep track not just of the vendor’s base version, but of our own commits/changes. We’ll use Jenkins to name WAR files in a way that includes the environment, the vendor version, and our local commit SHA-1.
  • Containers are the next major step for virtualization. We plan to explore how we can best run WAR files in containers for development purposes.

One of the most commonly requested public APIs among students has been a class/catalog API. The lack of an API for class/catalog data hasn’t stopped eager student developers: they usually end up scraping data from the catalog and storing it in a database where their applications can easily access it. We are happy to say that we have started work on a solution to this common student developer problem 🙂

The API will allow developers to query classes by term, subject, and course number to retrieve full class information, including details, instructor, class availability, and other information publicly available via the course catalog. We have started our design process on GitHub ( https://github.com/osu-mist/courses-api-design ) using the OpenAPI Initiative format (formerly known as Swagger). You can use the Swagger editor links below to see the first draft of the design:

Let us know what you think, either with a comment or a pull request. This design and the first implementation of this API won’t be final: our goal is to release a beta version of the API and collect developer feedback.


During our original exploration of APIs, we began by learning about the current space: we read books, watched webinars, and used Gartner guides. One question we couldn’t find an answer to was: what are other universities doing with APIs? This question led to the API survey. The results below were shared with the ITANA mailing list in a raw format.

The list of questions was developed in collaboration with members of ITANA (a group of IT architects from higher ed). Due to the high number of questions, some were not included in the final version of the survey. To give participants more freedom, the survey didn’t ask for emails or names. These results were shared with the API subgroup of ITANA via the mailing list; we are now putting them in a blog post to make them easier to discover in the future. If you have any questions about the survey format, design, or questions, drop us a note in the comments section below.

1. What is the name of your higher education institution?
* University of Michigan
* Simon Fraser University
* University of Washington
* Virginia Tech
* The University of Toledo
* Northwestern University
* George Mason University
* Columbia University
* University of Michigan
* University of Toronto
* University Of Chicago
* Brigham Young University
* University of Wisconsin – Madison
* University of Michigan
* University of Chicago
* Johnson & Wales University
* Yale
* Minnesota State Colleges & Universities
* University of California at San Diego
* Oregon State University
* Yale University

2. What is the enrollment size of your higher education institution?

Minimum: 5,000
Max: 140,000
Average: 38,000

3. Is your higher education institution currently working on Web Services, Service Oriented Architecture?


4. What is the FTE size of the team?

2 central IT teams: 7 operating suite of REST Web Services; 10+ building new Enterprise Integration Platform (EIP)
2 FTE – infrastructure. Many FTE developers who work with web services and soa, probably over 20 known to me.

5. Is there one central department working on this effort or multiple departments?

A single central department: 7
Multiple departments: 8

6. Was this initiative set up by top management or by a small group(s) / department(s)?

A single central department: 7
Multiple departments: 8

7. What technologies are you using?

Enterprise Service Bus (ESB): 7
API Gateway: 10

Other responses included:
* Services Registry, Message Broker
* Custom API’s, Vendor App API’s

8. What technologies are you trying to phase out, if any?
* migrating from custom .NET REST APIs to EIP with API Management
* point-to-point integrations, batch data transfers, database links
* PeopleSoft customizations
* Direct DB connections
* Batch download
* Looking to replace custom API codebases with iPaaS solution
* custom middleware
* Hub-based Webmethods

9. Are these web services / APIs accessed by internal departments / groups within your higher ed institution, or external 3rd party vendors?

Internal departments: 15
3rd party vendors: 10

10. Where do you publish the list of web services / APIs available?
* Still in its infancy, but currently at http://www.sfu.ca/data-hub.html
* http://webservices.washington.edu/ [this is the current, custom Web Services Registry … will be migrated to an API Management tool]
* Intranet
* Services Registry
* Internal wiki
* API Manager Application
* The intent is to simply use a web page
* No central place yet. This would part of benefit of a new iPaaS that has solid API management.
* n/a
* API Manager
* https://developers.yale.edu/
* Currently don’t have a good inventory. Looking to publish a list using an API Management Service
* Planning to publish using api manager
* not published online yet

11. What’s the URL of your web services / API / SOA documentation?
* http://www.sfu.ca/data-hub/api.html
* http://webservices.washington.edu/ [also extensive UW-centric documentation in Confluence wiki sites]
* Intranet
* https://serviceregistry.northwestern.edu
* Not accessible outside
* developer.it.umich.edu
* not yet available
* No central place yet. This would part of benefit of a new iPaaS that has solid API management.
* n/a
* Not available yet
* https://developers.yale.edu/
* N/A
* Documentation is not published yet
* not published online yet

12. What is the development stack used for developing SOA / API / web services?

Java / JVM: 14

* and many others
* PeopleTools, WSO2 Data Services
* Custom PHP.
* PHP and Perl
* JavaScript, iOS, Android

13. What are the primary benefits you are seeing from your API strategy?
* None yet, as it’s still in its infancy. The goal is to open up SFU data and encourage developers to consume it. The classic example is the mobile app. There are currently several in the Apple app store that rely on screen scraping to get the job done. We’d like to see that go away and encourage good development by students. This could translate into a better reputation for the university as a leading edge institute.
* close relationship between data management initiatives/governance and our ROA (Resource-Oriented Architecture) Web Services has made the governance of Web Services easier than it might have been otherwise. Also, maintenance has been easier since the number of Web Services roughly equals the number of data domains (a handful), with several endpoints per service which roughly equate to primary data tables (e.g. student). No specialized development to deliver only certain data to certain clients. Biggest benefit may be the cumulative effect on IT culture: developers now expect there to be APIs for data.
* Promote reuse, easier to maintain
* We are early in the process, but we are seeing some benefits in enabling consumption of identity data and in the integration of cloud-based systems with our on-premise systems.
* Ability of central IT to enable others to get done what they need to. Ability to swap in modern systems of record for legacy systems.
* Re-use of services. Changing integration patterns of copying data locally.
* We’ve switched the strategy from IBM’s MQ and SOAP to REST-style Web API’s secured with OAuth 2.0 access tokens, and have seen much improved interest from the developers in the Divisions. Two Divisions have started developing applications to use the services
* More modern and sustainable integrations. Data transparency and opportunities for distributed app development around the data.
* Lower cost of adoption for new customers. Centralized and consistent security model. Well defined data models have helped to define better APIs.
* Development of mobile applications
* Metrics on usage and types of applications using the data
* Hoping to solve integration challenges. Increased security versus direct database connections.
* Reusability; de-coupling from the database; discoverability
* Centralizing access to data. Having conversations with people to come up with a consensus to describe data models. Developing one location where developers on campus can go to request access to data and view documentation
* Normalized and consistent abstraction layer to institutional data.

14. What are the primary challenges you’ve seen and are running into with your SOA / API strategy?
* No budget.
* without a strong executive mandate (à la Bezos at Amazon), adoption velocity is slow, especially with established applications that already have privileged access to enterprise administrative data and don’t need to re-invest in a SOA approach. Most success with non-central IT where such privileges don’t exist and with disruptive forces such as new SaaS vendors where data is not easy to get without a SOA approach. Another challenge is the push-back from client developers on our purely RESTful strategy. They often want data preassembled from several REST resources and delivered via a single API call instead of doing the assembly themselves. The new EIP will facilitate this requirement.
* Convince developers and show benefits to management
* The ability of the community to ramp up and develop the skill sets necessary to expose services and consume them. We are also having issues with the amount of time it takes for data stewards to approve requests to consume services.
* Unbundling and rebundling complex logic in a new way.
* Everyone wants to consume APIs, nobody wants to contribute.
* The problem with MQ and SOAP was the learning curve for the Divisional developers – they simply didn’t have the time to figure out the details. PHP integration with MQ proved to be a challenge too.
* Prioritization. Funding. Technical debt.
* Early adoption was slow. Skill sets required to be productive are hard to acquire, which in turn slows down the amount of time until a staff member can become productive. No centralized documentation or API gateway for all services to be discovered.
* Resourcing, knowledge, disagreement over approach,
* Governance around data, security, org management
* Service governance. Getting infrastructure in place.
* Using unproven technology; changing the mindset of people who might be used to doing things in a certain way; security, specifically authorization
* There’s an education component of bringing people up to speed with APIs and how to use them. Some people don’t like change and feel that they have less control when they don’t have a local cache of the data.
* Adoption, documentation, technical ability

15. Would you describe your APIs as microservices?

No (explain below): 6
I don’t know: 6

No, explained:
* primarily implemented GET functionality which is by nature pretty chunky. Our Web services provide data between apps but don’t encapsulate business functionality except in limited cases. True microservices architecture would require a complete rearchitecture that accounted for eventual consistency and allowed for states of data not currently allowed
* We’re starting to adopt the microservice model, but at the moment we have a single “student record” service that returns 18 different entities.
* full-fledged APIs

16. If you have not yet started to work on SOA / API / web services, are you planning to do that in the future?

I don’t know: 1

Note: not all survey participants were presented with this question; only those who previously answered “no” to question 3.

17. Number of calls per minute for most active web service / API
* N/A
* 100
* 20
* < 1
* 1000
* 200
* Too early in the process to tell.
* N/A
* 3k per minute
* ? – in thousands for the hospital
* NA/Don’t know
* N/A
* not live yet


18. Number of web services / APIs available
* 5
* 12 each with several different resources
* 20
* 17
* 5-10
* 30 in our API Gateway
* currently 1 service – will be separated into between 10 to 16 micro services
* <10
* 24 APIs
* ~50
* 10
* Less than 12
* less than 15
* not live yet

19. Number of applications using these web services / APIs
* N/A
* 40-60
* 1
* 14
* 10-15
* 300+ Many are student applications
* Two planned for now.
* <10
* 83
* ?
* 10-20
* 5-10
* not live yet

20. Number of departments / organizations using these web services / APIs
* N/A
* 10-20
* 1
* 6
* 3
* Don’t Know
* Two planned for now
* Just internal to IT at this point.
* >25
* ?
* 10-20
* 6-7
* not live yet

21. How much advance notice before API / web service retirement do you provide to your users?
* We anticipate being able to give 1 year notice, but also plan to use API versioning to allow for multiple versions concurrently
* 6 months minimum
* 30 days
* We have allowed the provider to determine that, but our expectation is that it will be at least 18 months.
* NA
* We haven’t retired any services as yet, but we would be expected to provide as much notice as possible because Divisions may not have the resources available to change their consumers.
* I don’t know.
* 4 weeks for production, variable for test environment based on potential effects.
* Not at this level – more ad hoc – looking at an API manager to support this
* Once a web service endpoint is published, it is very difficult to retire.
* N/A
* two years is what we plan to provide

22. What is the granularity of your API versioning?

Single object / resource (e.g.: example.com/api/students/v1/): 1
Collection of objects / resources (e.g.: example.com/api/v1/students/): 8
I don't know: 3

23. What versioning scheme do your APIs use?

URL (/api/v1): 10
Query parameter (?v=1.0): 1
HTTP header: 3

24. From where do you serve query responses? (multiple choice)

Source database: 14
Intermediate data store / db: 6
Operational Data Store or Data Warehouse: 6

Other: code, LDAP

25. What data formats are used by your SOA / Web Services / API layer?


Other: xhtml

26. Which one of these hypermedia formats / types do you use?


27. How were the data models in your SOA / APIs (representation of data objects e.g.: course, student, event) defined?

Single department: 5
Group of departments: 6
Data governance: 5

* collaborative REST design sessions with as many stakeholders involved as possible
* We have to use existing system of record data models.
* Still need to define those

28. Are your data models a direct representation of your database tables / schema?

I don’t know: 0

29. Do you have a data governance initiative?

I don’t know: 0

32. Do you use any tools to automatically convert a db schema / db tables to web service, API or microservice?

I don’t know: 0

34. Does your higher ed institution have a development portal to onboard new developers, including: list of APIs, web services or misc. resources for developers?

I don’t know: 2

36. What type of authentication do you use (e.g., LDAP, SAML, social login,etc) with your development portal?

Social login: 0

* Basic Auth

37. Is the development portal open to any of the following? (multiple choice)

3rd party developers: 1

38. What software / technology do you use for the development portal?
* WSO2 API Gateway
* Jive
* WSO2
* WSO2 currently also looking at other solutions

39. Do you use an API Gateway / Management Layer?

I don’t know: 0

46. What types of authentication are required by your higher ed institution to make API / web services calls?
* Public ones: none but will move to access tokens. Private ones: OAuth (still in development)
* UW administered tokens, X509 certs
* ADFS, CAS, Basic Authentication
* set of credentials similar to NetID/password
* Basic Auth, OAuth, CAS, other
* Oauth2.0, WS-Security
* OAuth 2.0 Client Credentials flow – because the users don’t own the data, the University Registrar does. We will consider other flows when the user is actually a Resource Owner.
* API keys, BasicAuth
* WS-Security username token, sometimes client certificates
* jwt
* Username/password, firewall rules
* Oauth
* tokens

47. What types of authentication have you used in the past, that you phased out?
* client certs for very limited SOAP calls
* NA
* basic auth
* none
* n/a
* n/a
* N/A
* ldap
* none

48. Are the authentication tokens, api keys or other authentication methods specific to the application making the request or the user of the application?

Specific to the application: 9
Specific to the user of the application: 3
I don’t know: 0

49. What types of security policies do you have in place for making API / web services calls?
* None yet, but planning to move APIs behind the API gateway, use access tokens for all calls, and enforce throttling on public APIs and OAuth for private APIs
* use a home-grown permissions system ASTRA to manage what resources an application can access; applications are assigned roles just like people
* Applications must obtain claims before apps are authorized to use APIs
* In addition to application credentials, the ESB checks with the services registry if the application has approval to use the service. We check the IP addresses of external consumers.
* Depends, some are open, some are highly secured.
* The application making the calls must be assigned a UTORid (the primary identity credential used at UofT). This credential must be used to obtain an OAuth access token, which is then included with API call. There is a policy enforcement point that validates the access tokens. The user’s UTORid is included in the request so that the API container (WebSphere Application Server) can perform authorization.
* IP restricted when possible. Registered API key required.
* AuthNZ at the ESB level to determine who the requester is and whether or not they can call the method. At the functional web service layer, a lot of service providers will ask what data is returned in the request.
* Under development
* We are in the early stages of defining policies for next generation of APIs.
* It depends on the application
* none yet

50. Do you cache your API responses?

I don’t know: 1

52. How do you handle communication with developers who rely on your APIs, web services regarding upcoming features?


* Yammer
* Personal interaction
* wiki pages

53. What support mechanisms are used by your higher ed institution to provide support of your web services?

Ticket system: 13

* UserVoice
* Personal interaction

54. What has been your strategy for moving away from bulk data feeds?
* This is a struggle and one we hope to entice people away from with this new API service. That said, private data exchanges (e.g. between our ERP and our meta directory) likely won’t go through the API gateway, but will (we hope) still migrate to an ESB model and away from nightly data dumps.
* still evolving; in fact our new EIP will support bulk feeds as a standard interface for those clients that need/want them. What won’t be supported is direct connection to underlying schema
* We are investigating SOA in a small POC and from there hope to build consensus around moving forward with a larger deployment, institute a governance process, etc.
* Slowly moving away, provide education and training
* We have not settled on a single strategy. Right now, it is mostly in the form of encouragement. At this time, no pre-existing bulk data feeds have been converted. We have architected services into new projects where we would have used bulk data feeds in the past.
* None currently. We have had individual efforts where such a change was suggested or recommended, but the only data transmission method currently used is file transfers.
* Developing ESB, SOA, APIs is the goal but we haven’t started yet. Hope to start a small pilot this coming year.
* Demands that are more real-time in nature have naturally moved us away, but we do still use them sometimes – in fact, we sometimes use API calls to produce the data feeds.
* Many bulk data feeds are used to replicate data into local databases. Local databases are preferred because they are thought to be the only way to provide reliable and timely data to local applications. We are trying to change this pattern by providing APIs.
* Initial approach was to replace batch downloads with real-time transactional messages. We found that the benefit was minimal until the academic or administrative process was changed to accommodate real-time transactions, then the benefit was substantial. However, very few Divisions are ready to change processes even if the benefits were obvious. It needs time for the administrators to think in real-time transactions rather than daily/weekly batch downloads.
* As opportunities arise in existing projects whose constraints allow it, rather than as an initiative in and of itself.
* We have been suggesting that our high-volume bulk curricular data applications use a new service we have for delivering roster changes to JMS queues in an asynchronous manner. We are also delivering HR data to our UW System customers via our centralized HR system to UW System HR systems for local provisioning. We still have some large pull customers that use SOAP services to refresh their local databases, but work is underway to enhance what we can deliver asynchronously.
* Replace with services
* Still in the early stages
* Show the value of real time data and/or events
* Start by tackling only new work and integrations. Add desirable features to APIs.

56. How was the communication to the developers handled throughout the migration away from bulk data feeds?
* We’ll see.
* hasn’t happened yet
* We’ve not gotten that far yet.
* Provide education and training
* N/a
* Through architecture engagements with individual projects.
* The problem was we started with the developers – we should have started with the senior administrators by describing the benefits to their processes and increase in data currency…and not mention technology at all!
* Ad hoc, project by project.
* Email and meetings
* This is still work in progress.
* N/A
* Since the API initiative is very new, we are hoping to have results available in the next 3-4 years
* not applicable yet.

57. What are the top 3 SOA / API problems you’re trying to solve?
* Get an operational gateway in service; retire SOAP and associated legacy support applications; move to real-time exchange of information between systems
* Top 7: integration with metadata management strategy and tools; granular data element level security; increased velocity building new APIs; better managed application management; cross-domain APIs; highly-performant search; produce more events for client apps
* 1. How to do it (Authentication, stumbling blocks, good default design patterns to champion). 2. How to illustrate the value of this effort to upper management to allocate funds. 3. How to navigate the political waters
* Easier access management; Reduce development complexity; Fit for cloud and mobile first strategy
* (1) Adoption (2) Slowness in data steward approvals (3) Strengthening security
* Getting interest/buy-in from the web/portal team; training/obtaining staff with the skills to facilitate this move (analysts and developers with actual API and SOA experience); lack of understanding/prioritization from IT executives around the importance of this architectural change.
* 3) simplification/feed elimination 2) timely data 1) flexibility for apps mash-ups, mobile, etc.
* Improve speed of innovation Migrate away from legacy systems Improve user experience
* Getting enough re-usable content in our API Directory Providing training to developers Changing the culture to make exposing data to our community of developers an important deliverable for projects
* secure real-time access to student data in the System of Record, rather than stale local copies; near real-time synchronization of identity management repositories
* Exposure or data to innovators around campus. More modern, real-time, sustainable integrations. Overall efficiency, maximizing reuse instead of duplicated bulk jobs.
* API management; continuous integration within our SOA environments; OAuth or Identity Server-provided credentials for client-side JavaScript that accesses our servers.
* Improve integration with campus and 3rd party systems More secure access to data by campuses and 3rd party systems Promote re-use of business functions Promote innovative uses of
* Authorization Canonical data model API Governance
* RFP for API Gateway Automation Developing APIs that make business sense

58. What’s your higher ed institution’s 5-year plan for SOA / API / Web Services?
* Get a production service rolled out that is actively used by the university community. Hopefully secure budget to manage it one day.
* Build an Enterprise Integration Platform to replace ODS and to feed EDW, and migrate existing Web Services to be hosted from EIP; govern EIP holistically as part of a larger data management initiative
* To say we have a 5-year plan for Web Services is to give us more credit for planning than I believe we deserve at the moment. Right now we’re trying to build momentum and doing so in a guerrilla fashion as part of smaller projects in the hopes we can scale this effort in the future.
* Moving away from batch processes to ESB and service automation
* More adoption
* Last time I spoke with the teams about it, no plan existed other than ad-hoc development as required by selected projects. And those projects are generally rejected due to lack of supportability.
* Not there yet.
* Don’t Know
* We’re waiting to see what our experience is with our current approach (REST/OAuth 2.0) and if successful, focus on API management (we’ll probably buy a product).
* I don’t know.
* Build an Integration Center of Excellence for our campus.
* Still in planning and proof-of-concept phase. Evaluating cloud API management solutions.
* Have around 100+ apis
* Start by centrally providing APIs. Develop a foundation (docs, testing, infrastructure) that others can leverage. Get other departments on campus to develop APIs using best practices provided centrally. In 3 years, we’d like APIs to be the main method of transferring data between departments.

59. What are your areas and topics of interest for SOA?
* API Gateway deployment, currently
* security metadata
* We would like to see it in action at a larger scale and the effect it has had on productivity. We would be interested in discussing architecture and infrastructure decisions to learn from those. At this time we are primarily leaning toward an ESB-based platform offering web services, primarily RESTful, as our approach.
* ERP system integration and mobile apps
* (1) OAuth 2 (2) Adoption strategies (3) API Management Tools (4) Approval workflows and automation of approval workflows (5) Strategies for working effectively with data stewards (6) Documentation tools
* Real-time data exchange; improved accuracy of data; improved security architecture
* API sharing for common things that we do in higher ed.
* Is there an opportunity to define a common API for all universities to implement? http://edutechnica.com/2015/06/09/flipping-the-model-the-campus-api/ (I’m george.kroner@umuc.edu)
* Governance, governance, governance.
* ESB to iPaaS change in the higher ed market. Empowering innovative developers with data. Establishment of shared APIs as a path to more sane, governed, transparent, sustainable integration landscape in IT.
* REST, APIs, Authentication, Authorization, Organizational
* Already listed on ITANA site
* developing APIs; speeding up the process through automation

Posted in API.

Varnish is a robust caching service used by many high-profile, high-traffic websites. Acquia uses Varnish to help end users retrieve websites faster and to keep the load down on your servers. Once a page is in the cache, performance is fast, but what if you need to make a quick content adjustment? The cached content is no longer up to date and needs to be cleared, so how can you keep your cache fresh?

For our Drupal site, we set up the Cache Expiration and Purge modules to connect to our Varnish nodes and keep all our pages up to date. Modules like Varnish and Purge integrate some page cleaning on their own, but they don’t clear Views or Panels caches. Cache Expiration offers more configurable options to act on, and it can utilize the Workflow module to update our Views. With this extra control we were comfortable extending our cache lifetime beyond what Drupal’s pull-down menu lets you select, to 7 days. With a quick config line change in settings.php we can set the cache time:

$conf['page_cache_maximum_age'] = 86400 * 7; // 86400 seconds per day, for 7 days

Now that our pages set the cache header to 7 days, we need to set up one of the many purge options that Cache Expiration offers. After installing Cache Expiration, go to admin/config/system/expire and select External Expiration, since we want to connect to our external cache server. If you are going to use Acquia Purge or Varnish, uncheck “Include Base URL in Expires”. The only sections I really needed to worry about were Node Expiration and Menu Links Expiration, as we don’t use Comments and our User pages are not their Drupal user account pages. For files, we use Apache to send file-type headers that tell the cache server how long to hold onto them, and all file uploads append _[0-9] to the file name. For Node Expiration we set all three actions (Insert, Update, Delete) to trigger a page cache purge. Depending on your setup you might be fine with the basic Front Page and Node Page being purged, but I wanted to do something a little different and chose Custom with these two URLs


The next setting we configured was the Menu Links Expiration section. Again, all actions are checked. The right Menu Depth depends on your menu structure and which menus link to content that will be updated frequently; for us, Main Menu with a depth of 1 was all I needed.

The maintainer’s definition of Menu Depth:

The goal is to easily arrange for a high visibility menu to be consistent and current on all the pages the menu links to. So, if you are using a menu block with a depth of 2, you can configure this plugin to clear the URLs linked to by said menu block.

Now that all our nodes are set up to purge when content is changed, as well as our menus, it’s time to move on to our Views. We set up our rules to purge our Views as one rule per View, so for Feature Story we have one rule that purges the Home page and our All Stories page.

Create a new Rule and use two events as the trigger: After Saving New Content and After Updating Existing Content. We filtered these events by a specific content type so that the rule only fires for that specific piece of content; for this example I will use Feature Story as the content type. We didn’t need any conditions, so that was left as None. Finally, down to the part that matters: the Actions. Add a new action, Clear URL(s) from the Page Cache. As soon as you select this action the page will update and present you with a text box to enter the URL(s) you wish to purge. To continue with the Feature Story example, our site only had one URL that needed to be purged


Save, and that’s it. Any time someone updates a Feature Story on your site, the node page and the View’s page will be wiped from the cache server, and the next request to those pages will cache the newest content.

For the home page, I have a special rule that purges it from the cache any time anything is updated on the site; you might want to modify this to suit your needs.
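Under the hood, modules like Purge invalidate pages by sending an HTTP PURGE request for each affected path to each Varnish node (Varnish only honors PURGE when its VCL defines a purge handler and ACL). A minimal Python sketch of that idea, where the node addresses, hostname and paths are hypothetical examples, not real endpoints:

```python
import http.client

def purge_plan(nodes, host, paths):
    """Build the list of PURGE requests one content change should trigger:
    every invalidated path, on every Varnish node."""
    return [(node, "PURGE", path, host) for node in nodes for path in paths]

def send_purge(node, host, path, timeout=5):
    """Send a single HTTP PURGE request to one Varnish node.
    Varnish looks up cached objects by Host header + URL, so the Host
    header must match the site the page was cached under."""
    conn = http.client.HTTPConnection(node, timeout=timeout)
    try:
        conn.request("PURGE", path, headers={"Host": host})
        return conn.getresponse().status  # 200 when the object was purged
    finally:
        conn.close()

# E.g. saving a "Feature Story" node should refresh the home page and the
# story listing on both (hypothetical) Varnish nodes:
plan = purge_plan(["10.0.0.1:6081", "10.0.0.2:6081"],
                  "example.edu", ["/", "/stories"])
# send_purge() is not called here because it needs a live Varnish node.
```

This is why the Rules action above only asks for URLs: the module already knows the cache nodes, and simply fans the listed URLs out to each of them.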



Cache Expiration: https://www.drupal.org/project/expire

Purge: https://www.drupal.org/project/purge

Workflow: https://www.drupal.org/project/workflow

Varnish: https://www.drupal.org/project/varnish

Acquia Purge: https://www.drupal.org/project/acquia_purge

Web Application Programming Interfaces (APIs) allow us to share data between systems while preventing the leaking of low-level details that would otherwise cause tight coupling between systems. These APIs are just like any application, with the small difference that they don’t have an end-user GUI. Instead, APIs focus on gathering data from backend(s) and performing operations on that data while providing a standard, consistent interface to those operations. APIs have the same need as regular applications for iterative planning/design and user feedback. This is where Swagger (the OpenAPI Specification) comes in.

Work on the Swagger design language started in 2010 as a framework to document and describe Web APIs. On January 1, 2016, it was renamed the OpenAPI Specification. The rename was part of converting the Swagger project into one of the Linux Foundation Collaborative Projects, which have more involvement from vendors and the community in the direction of the toolset and the design language.

The OpenAPI Specification lets us use JSON or YAML to describe our web API endpoints (URLs), their parameters, response bodies and error codes. Before the OpenAPI Specification existed, people would use text files, Word documents or other non-web-API-friendly formats to document their APIs.
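For example, a minimal Swagger 2.0 document for a single endpoint might look like the following (the API, paths and fields are illustrative only, not one of OSU’s actual APIs):

```yaml
swagger: "2.0"
info:
  title: Example Directory API   # hypothetical API for illustration
  version: "1.0"
basePath: /v1
paths:
  /students/{id}:
    get:
      summary: Look up a student by ID
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The student record
          schema:
            $ref: "#/definitions/Student"
        "404":
          description: No student with that ID
definitions:
  Student:
    type: object
    properties:
      id:
        type: string
      name:
        type: string
```

A file like this is enough for the tooling to render documentation, generate client code, or stand up a mock server.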

When OSU began our API development efforts, we wanted to have a communication and feedback cycle with OSU developers. Using the OpenAPI Specification (Swagger), we can use a tool such as the Swagger Editor (http://editor.swagger.io/#/) and change the documentation of an API in real time while we talk to developers on campus. This allows us to change the visible documentation of an API without having to implement it or spend a lot of time developing a separate structure or document. We can make a change directly to the yaml file, which is faster than adjusting already-implemented APIs.

Information Services researched a variety of tools for describing APIs. We looked at the OpenAPI Specification, RAML (http://raml.org/), API Blueprint (https://apiblueprint.org/) and I/O Docs (https://github.com/mashery/iodocs). At first, from a technical perspective, RAML was the most attractive design language compared to the OpenAPI Specification, but version 2 of the OpenAPI Specification addressed v1’s downsides. The OpenAPI Specification also had the greatest user base, with a huge online community of developers, and with that came vendor support and open source tools/frameworks.

The benefits of the OpenAPI Specification are:

  • Online editor – provides a WYSIWYG editor for the API. Easy to make changes and see the output.
  • Mock server – you can describe your API and get a mock/test server endpoint that returns test data. This is helpful when testing APIs.
  • Client code – sample code that can be used to test APIs and consume them from a variety of languages.
  • Vendor/OSS support – the variety of open source tools, frameworks and vendor offerings that work with the OpenAPI Specification made it the de facto language for documenting APIs.

Our API development cycle is:

  1. Talk to stakeholders and data owners.
  2. Design API (using OpenAPI Specification).
  3. Collect Feedback.
  4. Implement.
  5. Release as Beta & collect feedback.
  6. Release to Production.
  7. Go back to first step.

These steps are similar to the application development cycle. The key component of our API development is listening to our community of developers. The APIs are built for developers, and using the OpenAPI Specification to design an API with developers in mind lets us collect feedback right away, early on. Before we start implementing an API, we have a really good idea of what developers need, and the design has been validated by API consumers (OSU developers), stakeholders and data owners.

Our API source code is hosted on GitHub, and the OpenAPI Specification file is treated just like code: it is versioned and included along with the source. The API gateway that we use (Apigee) allows us to upload our OpenAPI Specification yaml file, and it creates the documentation pages needed for our APIs. This streamlines our documentation while also preventing us from locking ourselves into a single vendor. If, down the road, we need to move to another API gateway solution, we will be able to re-use our OpenAPI Specification yaml files to create and document our APIs.

The OpenAPI Specification has been quick for our team to learn; our students are able to pick it up in a few hours. Starting from a sample file, it is easy to modify it to document a new API. Once a person has experience with the OpenAPI Specification, we can have a design document to share with developers for feedback in less than 30 minutes. This enables us to develop APIs faster and keep our developers happy. Faster development and happy developers? That’s a win.

Posted in API.