Expose REST APIs as GraphQL

IMPORTANT: The content on this page is written for version 0.6.0 of the fastly crate. If you have previously used this example, your project may be using an older SDK version. View the changelog to learn how to migrate your program.

Your infrastructure consists of one or more REST APIs and you want to expose a unified GraphQL endpoint to fetch and cache data for your next-generation applications.

Illustration of concept

WARNING: This information is part of a limited availability release. Portions of this API may be subject to changes and improvements over time. Fields marked deprecated may be removed in the future and their use is discouraged. For more information, see our product and feature lifecycle descriptions.

GraphQL is a typed query language for APIs that allows you to fetch data for your application with rich, descriptive queries. Your API defines a schema that clients can use to request exactly the data they need and nothing more, often in a single request to the API.

Compute@Edge allows you to respond to HTTP requests at the edge using a variety of programming languages that compile to WebAssembly. For the purposes of this solution, we will use Rust, as it has a rich ecosystem of libraries including the juniper GraphQL crate. This allows you to expose a GraphQL endpoint that could be fetching data from multiple backend systems, increasing the pace at which you can build new applications on top of your data stack.

On top of this, you can make use of the cache override interfaces in the Fastly Rust SDK to intelligently cache the responses from your backend APIs, reducing latency for your end-users and decreasing the load on your backend servers.

Instructions

IMPORTANT: This solution assumes that you already have the Fastly CLI installed. If you are new to the platform, read our Getting Started guide.

Initialize a project

If you haven't already created a Rust-based Compute@Edge project, run fastly compute init in a new directory in your terminal and follow the prompts to provision a new service using the default Rust starter kit:

$ fastly compute init
Name: [graphql]
Description: A GraphQL processor at the edge
Author: My Name
Language:
[1] Rust
[2] AssemblyScript (beta)
Choose option: [1] 1
Starter kit:
[1] Default (https://github.com/fastly/compute-starter-kit-rust-default.git)
Choose option or type URL: [1]
Domain: [random-funky-words.edgecompute.app]
Backend (originless, hostname or IP address): [originless]

Install dependencies

The Rust ecosystem offers several libraries for parsing GraphQL queries, including the juniper crate, which you will use for this solution.

Add this to your project's Cargo.toml file, optionally disabling the default-features as they are not required for this solution.

Cargo.toml
TOML
juniper = { version = "0.14.2", default-features = false }

Juniper will take care of parsing the inbound GraphQL queries, and Fastly can then make the necessary requests to your backend REST API.

When the responses come back, you'll need something to parse those so that they can be presented to the user as a GraphQL response. serde is a Rust crate that provides (de)serialization APIs for common formats. To parse the backend responses' JSON bodies, you will use the serde_json crate.

This solution also uses serde_json to encode the outgoing JSON responses to GraphQL requests.

HINT: If your backends respond with XML, you could adapt the code in this solution to use the serde-xml-rs crate.

Add these dependencies to your Cargo.toml file:

Cargo.toml
TOML
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Using the mock backend

To allow you to build this solution without creating your own REST API backend, we've made a mock REST API (using Compute@Edge) that you're welcome to use. Our mock API has two endpoints:

  • GET /users/:id - Retrieve a user
  • GET /products/:id - Retrieve a product

Your new GraphQL endpoint can unify these calls so a client application can get both of these types with a single request.
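For example, once your GraphQL endpoint is in place, a client could fetch a user and a product together with a single query like this (a hypothetical query against the schema you'll define later in this guide, using the example IDs from this solution):

```graphql
{
  user(id: "123") {
    name
    email
  }
  product(id: "abcdef") {
    name
    year
  }
}
```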

Responses from the mock backend are always JSON objects with an "ok" boolean and a "data" payload. A request to the GET /users/:id endpoint would result in a response like this:

GET /users/123
HTTP
{
  "ok": true,
  "data": {
    "id": "123",
    "name": "Test Bot",
    "email": "me@example.com"
  }
}

Make sure to add this backend to your Fastly service, which you can do in either of the following ways:

  • On manage.fastly.com: Connecting to origins

  • Using the Fastly CLI:

    $ fastly backend create --name=api_backend --address=mock.edgecompute.app --service-id=<service> --version=<version>

Define data types

Since the response follows a predictable format, how can you model the API's data types in Rust? You can build a BackendResponse type with a generic type parameter <T> that encapsulates both document types, removing the need to duplicate the ok and data fields when adding more types.

You need to be able to deserialize these types from JSON. By adding the Deserialize implementation from serde, you will be able to build these types from the responses you get from the backend.

To define these response, user, and product types, add the following definitions to src/main.rs:

HINT: If you're starting from scratch, feel free to copy the entire code sample as a replacement for the default main.rs file.

src/main.rs
Rust
use serde::Deserialize;

#[derive(Deserialize)]
struct BackendResponse<T> {
    ok: bool,
    data: T,
}

#[derive(Deserialize)]
struct User {
    /// User ID
    id: String,
    /// Metadata
    name: String,
    email: String,
}

#[derive(Deserialize)]
struct Product {
    /// Product ID
    id: String,
    /// Metadata
    name: String,
    year: i32,
    color: String,
}

Make requests to the backend

Now you can define an ApiClient type to handle making queries to the backend API:

HINT: If you had multiple backends, you could adapt this code to use the correct backend for each query, and introduce new backend response types if needed.

src/main.rs
Rust
const BACKEND: &str = "api_backend";
const BACKEND_URL: &str = "https://mock.edgecompute.app";

/// The default TTL for requests.
const TTL: u32 = 60;

struct ApiClient;

impl ApiClient {
    pub fn new() -> ApiClient {
        ApiClient {}
    }

    /// Get a user, given their ID.
    pub fn get_user(&self, id: String) -> Result<User, Error> {
        let req = Request::get(format!("{}/users/{}", BACKEND_URL, id)).with_pass(true);
        let mut resp = req.send(BACKEND)?;
        // Read the response body into a BackendResponse
        let response: BackendResponse<User> = resp.take_body_json()?;
        Ok(response.data)
    }

    /// Get a product, given its ID.
    pub fn get_product(&self, id: String) -> Result<Product, Error> {
        let req = Request::get(format!("{}/products/{}", BACKEND_URL, id)).with_ttl(TTL);
        let mut resp = req.send(BACKEND)?;
        // Read the response body into a BackendResponse
        let response: BackendResponse<Product> = resp.take_body_json()?;
        Ok(response.data)
    }
}

This is great! You now have an API client that is aware of the shape of your backend data types, and you can invoke it like this:

src/main.rs
Rust
#[allow(unused_mut)]
#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    let backend = ApiClient::new();
    let product = backend.get_product("abcdef".to_string())?;
    Ok(Response::from_body(product.name))
}

Build the GraphQL schema

Now we can introduce the juniper crate, which will build your GraphQL schema and call into your ApiClient to fulfill client requests.

First, you need a root query type for the queries to use. This contains the logic that will run to handle an incoming GraphQL query. Your implementation will pass the request on to the API client you built earlier.

You also need to annotate your types with GraphQLObject, and change the query response types to FieldResult from juniper. This allows the crate to derive a GraphQL schema from our Rust types:

src/main.rs
Rust
// Add Query type using the ApiClient as the request context
struct Query;

#[juniper::object(Context = ApiClient)]
impl Query {
    fn user(&self, id: String, context: &ApiClient) -> FieldResult<User> {
        context.get_user(id)
    }
    fn product(&self, id: String, context: &ApiClient) -> FieldResult<Product> {
        context.get_product(id)
    }
}

// Add juniper::GraphQLObject here
#[derive(Deserialize, GraphQLObject)]
struct User {
    // ... (fields as defined earlier)
}

// And here
#[derive(Deserialize, GraphQLObject)]
struct Product {
    // ... (fields as defined earlier)
}

// Implement juniper context so the backend is available for requests
impl juniper::Context for ApiClient {}

Expose the GraphQL endpoint

We now have a GraphQL schema that juniper can work with, so let's work on the main function and have it handle requests to the POST /graphql endpoint:

src/main.rs
Rust
#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Dispatch the request based on the method and path.
    // The GraphQL API itself is at /graphql. All other paths return 404s.
    let resp: Response = match (req.get_method(), req.get_path()) {
        (&Method::POST, "/graphql") => {
            // Instantiate the GraphQL schema
            let root_node = RootNode::new(Query, EmptyMutation::<ApiClient>::new());
            // Add context to be used by the GraphQL resolver functions,
            // in this case a wrapper for a Fastly backend.
            let ctx = ApiClient::new();
            // Deserialize the POST body into a GraphQL request
            let graphql_request: GraphQLRequest<DefaultScalarValue> = req.take_body_json()?;
            // Execute the request, serialize the response to JSON, and return it
            let res = graphql_request.execute(&root_node, &ctx);
            Response::new().with_body_json(&res)?
        }
        _ => Response::from_body("404 Not Found").with_status(404),
    };
    Ok(resp)
}

Congratulations! You now have a working GraphQL endpoint running at the edge. If you haven't yet, run the following commands to build and deploy your service to the edge:

$ fastly compute build
✓ Initializing...
✓ Verifying package manifest...
✓ Verifying local rust toolchain...
✓ Building package using rust toolchain...
✓ Creating package archive...
SUCCESS: Built rust package GraphQL (pkg/GraphQL.tar.gz)
$ fastly compute deploy
✓ Initializing...
✓ Reading package manifest...
✓ Fetching latest version...
✓ Validating package...
✓ Cloning latest version...
✓ Uploading package...
✓ Activating version...
✓ Updating package manifest...
Manage this service at:
https://manage.fastly.com/configure/services/33CwjWt29T8i4wuW9js1t7
View this service at:
https://graph.edgecompute.app
SUCCESS: Deployed package (service 33CwjWt29T8i4wuW9js1t7, version 66)
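You can exercise the new endpoint directly with curl (illustrative only: substitute your own service's domain, and note that the exact response shape depends on your schema):

```
$ curl https://graph.edgecompute.app/graphql \
    -H "Content-Type: application/json" \
    -d '{"query": "{ product(id: \"abcdef\") { name year } }"}'
```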

Serve GraphQL Playground

Wouldn't it be great if there was some easy way to visualize your new graph API? Let's expose GraphQL Playground, which is an in-browser IDE for working with GraphQL services. Helpfully, the source for the playground is built into the juniper crate. Let's import this now, and add a route handler for the root path to serve the GraphQL playground source:

src/main.rs
Rust
use juniper::http::playground::playground_source;
// Serve GraphQL playground
(&Method::GET, "/") => Response::from_body(playground_source("/graphql")),

Build and deploy your service again, and you should be presented with GraphQL Playground. Explore the schema and run some queries to see data from the backend API served and cached at the edge.

Next Steps

This solution shows how to use a single backend for queries, but you could adapt the code to work with multiple backends. You could also validate responses from your backends at the edge to improve the observability of your systems. See the logging section of the Rust SDK guide.