Warpgrapher
Warpgrapher is a framework for developing graph-based API services. Describe the data model for which you want to run a web service, and Warpgrapher automatically generates a GraphQL schema from the data model, as well as a set of resolvers for basic create, read, update, and delete (CRUD) operations on that data.
If you need more sophisticated, custom queries and endpoints, you can supply your own custom resolvers. Warpgrapher will automatically generate the GraphQL configuration and invoke your custom resolvers when appropriate.
The project is currently in development. Prior to reaching v1.0.0:
- Minor versions represent breaking changes.
- Patch versions represent fixes and features.
- There are no deprecation warnings between releases.
For in-depth usage information, see the API Documentation.
To browse the source code or contribute, see the project's GitHub Repository.
Quickstart
This guide will walk through creating a brand new project using the Warpgrapher engine. The quickstart example will create a very simple service. It will store email addresses for users. Warpgrapher is far more capable, allowing storage and retrieval of complex data models with many relationships. But for now, to get started quickly, begin with as simple a data model as possible.
This quickstart assumes a working knowledge of Rust, GraphQL, and at least one graph database. For example, we don't cover creating a new Rust project using cargo init.
Configuration
First, set up the Cargo.toml file to import Warpgrapher as a dependency. There are crate features for each of the databases supported as a back-end. Use the gremlin feature to support Gremlin-based databases such as Apache Tinkerpop and Azure Cosmos DB. Use the cypher feature to support Cypher-based databases, such as AWS Neptune and Neo4J. This tutorial example uses Neo4J.
Cargo.toml
[dependencies]
warpgrapher = { version = "0.11.2", features = ["cypher"] }
The src/main.rs file begins with a definition of the data model for the example:
static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
";
Configurations are written in YAML. Although this example uses a static string for convenience, configurations may be stored in standalone files, or assembled from multiple parts.
The example configuration illustrates several principles in configuring a Warpgrapher engine. The configuration format itself is versioned, for backward compatibility. The version: 1 line notes that this configuration uses version 1 of the configuration file format. Until Warpgrapher reaches version 1.0, breaking changes in the config file format are permitted. After 1.0, breaking changes will trigger an increment to the configuration version.
The configuration contains a model object. The model is a list of the types present in the data model. In this case, the data model has only a single type, called User. Type definitions contain one or more properties on the type, listed under props. In this example, the props list contains only one property, named email. The email property is of type String.
Altogether, this configuration defines a very simple data model. That data model keeps records about users, and the one property tracked for users is their email address.
Source Code
Once the configuration describing the data model is in place, it takes relatively little code to get a Warpgrapher engine up and running, ready to handle all the basic CRUD operations for that data.
The example creates a request context for the engine. The request context does two things. First, it tells the engine which type of database endpoint to use, which is Neo4J in this case. Second, the context provides a way for systems built on Warpgrapher to pass application-specific data into the engine for later use by custom-written endpoints and resolvers. In this example, there's no such custom data, so the context is empty other than designating a DBEndpointType of CypherEndpoint.
#[derive(Clone, Debug)]
struct AppRequestContext {}

impl RequestContext for AppRequestContext {
    type DBEndpointType = CypherEndpoint;
    fn new() -> AppRequestContext {
        AppRequestContext {}
    }
}
The Warpgrapher engine is asynchronous, so the main function is set up to be executed by Tokio in this example.
#[tokio::main]
async fn main() {
Warpgrapher is invoked to parse the configuration string created in CONFIG above.
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
Next, the database endpoint is configured using a set of environment variables. See below for the correct environment variables and values.
// define database endpoint
let db = CypherEndpoint::from_env()
    .expect("Failed to parse cypher endpoint from environment")
    .pool()
    .await
    .expect("Failed to create cypher database pool");
The configuration and database created above are passed to the Warpgrapher engine, as follows.
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
    .build()
    .expect("Failed to build engine");
At this point, the Warpgrapher engine is created and ready to field queries. The remainder of the example constructs a sample GraphQL query, submits it to the Warpgrapher engine, and prints the query results to stdout. In a realistic system, the Warpgrapher engine would be invoked from the handler function of an HTTP server.
// execute graphql mutation to create new user
let query = "
    mutation {
        UserCreate(input: {
            email: \"a@b.com\"
        }) {
            id
            email
        }
    }
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();

// display result
println!("result: {:#?}", result);
Database
Configure database settings using the following environment variables:
export WG_CYPHER_HOST=127.0.0.1
export WG_CYPHER_PORT=7687
export WG_CYPHER_USER=neo4j
export WG_CYPHER_PASS=*MY-DB-PASSWORD*
Start a Neo4j 4.4 database:
docker run --rm -p 7687:7687 -e NEO4J_AUTH="${WG_CYPHER_USER}/${WG_CYPHER_PASS}" neo4j:4.4
Run
Run the example using cargo as follows.
cargo run
The output from the example should look something like the following.
result: Object({
    "data": Object({
        "UserCreate": Object({
            "id": String(
                "7e1e3497-dcfd-4579-b690-86b110c8f96a",
            ),
            "email": String(
                "a@b.com",
            ),
        }),
    }),
})
The identifier will be a different UUID than the one shown above, of course.
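Once created, the user can also be read back through the generated query operations. The following is only a sketch: the read query name and input shape are assumptions inferred from the naming pattern of the generated UserCreate mutation, and are not shown elsewhere in this quickstart.

```graphql
query {
  User(input: { email: "a@b.com" }) {
    id
    email
  }
}
```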
Full Example Code
The full example source code listing is below:
src/main.rs
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::Engine;

static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
";

#[derive(Clone, Debug)]
struct AppRequestContext {}

impl RequestContext for AppRequestContext {
    type DBEndpointType = CypherEndpoint;
    fn new() -> AppRequestContext {
        AppRequestContext {}
    }
}

#[tokio::main]
async fn main() {
    // parse warpgrapher config
    let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");

    // define database endpoint
    let db = CypherEndpoint::from_env()
        .expect("Failed to parse cypher endpoint from environment")
        .pool()
        .await
        .expect("Failed to create cypher database pool");

    // create warpgrapher engine
    let engine: Engine<AppRequestContext> = Engine::new(config, db)
        .build()
        .expect("Failed to build engine");

    // execute graphql mutation to create new user
    let query = "
        mutation {
            UserCreate(input: {
                email: \"a@b.com\"
            }) {
                id
                email
            }
        }
    "
    .to_string();
    let metadata = HashMap::new();
    let result = engine.execute(query, None, metadata).await.unwrap();

    // display result
    println!("result: {:#?}", result);
}
Server Integration
The Warpgrapher engine does not come with a bundled HTTP server. Instead, it can be integrated with the HTTP server framework of choice for a given application, or invoked in other ways, such as from an AWS Lambda function.
Actix Web Integration
A full example of integrating Warpgrapher with Actix Web is contained in the warpgrapher-actixweb repository on GitHub. A slightly simplified version of that project is reproduced with additional description below.
To integrate Warpgrapher with an Actix Web server, include the following dependencies in the Cargo.toml file.
Cargo.toml
[dependencies]
actix-http = "3.0.0-beta.5"
actix-web = "4.0.0-beta.6"
actix-cors = "0.6.0-beta.2"
serde = "1.0.135"
serde_json = "1.0.78"
warpgrapher = { version="0.10.4", features=["cypher"]}
The rest of the code necessary to accomplish the integration is contained within the single source code file below. First, a number of structs and functions are imported from the Actix and Warpgrapher crates.
src/main.rs
use actix_cors::Cors;
use actix_http::error::Error;
use actix_web::middleware::Logger;
use actix_web::web::{Data, Json};
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use serde::Deserialize;
use serde_json::Value;
use std::collections::HashMap;
use std::convert::TryFrom;
use std::fs::File;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::juniper::http::playground::playground_source;
use warpgrapher::Engine;
The AppData struct, defined below, is used to pass application data created during setup into the web server. In the case of this integration, the application data that is passed into the web server is the Warpgrapher Engine.
#[derive(Clone)]
struct AppData {
    engine: Engine<Rctx>,
}

impl AppData {
    fn new(engine: Engine<Rctx>) -> AppData {
        AppData { engine }
    }
}
Just like the Quickstart tutorial, this integration creates a RequestContext that could be used to pass data into the Warpgrapher engine for custom resolvers or endpoints, but it is left empty in this example. The Rctx struct does define one associated type, which selects Cypher as the database type to be used for this Warpgrapher engine.
#[derive(Clone, Debug)]
struct Rctx {}

impl RequestContext for Rctx {
    type DBEndpointType = CypherEndpoint;
    fn new() -> Self {
        Rctx {}
    }
}
Next, the integration defines a GraphqlRequest struct that is used to deserialize queries coming from Actix Web and pass the query content to the Warpgrapher engine.
#[derive(Clone, Debug, Deserialize)]
struct GraphqlRequest {
    pub query: String,
    pub variables: Option<Value>,
}
The following function is the handler that takes requests from the Actix Web framework and passes them into the Warpgrapher engine. In short, it pulls the query and query variables from the Actix Web request and passes those as arguments to the Warpgrapher engine's execute function. A successful response is passed back as an Ok result. Errors are returned within an InternalServerError.
async fn graphql(data: Data<AppData>, req: Json<GraphqlRequest>) -> Result<HttpResponse, Error> {
    let engine = &data.engine;
    let metadata: HashMap<String, String> = HashMap::new();

    let resp = engine
        .execute(req.query.to_string(), req.variables.clone(), metadata)
        .await;

    match resp {
        Ok(body) => Ok(HttpResponse::Ok()
            .content_type("application/json")
            .body(body.to_string())),
        Err(e) => Ok(HttpResponse::InternalServerError()
            .content_type("application/json")
            .body(e.to_string())),
    }
}
To make it easier to explore the schema generated by Warpgrapher, the integration example also includes a handler function that returns a GraphQL playground, at the /playground path. The handler function is shown below.
async fn playground(_data: Data<AppData>) -> impl Responder {
    let html = playground_source("/graphql", None);
    HttpResponse::Ok()
        .content_type("text/html; charset=utf-8")
        .body(html)
}
The create_engine function pulls data from environment variables to determine how to connect to a Cypher-based database. These are the same environment variables described in the Quickstart and the Neo4J section of the Databases book.
async fn create_engine(config: Configuration) -> Engine<Rctx> {
    let db = CypherEndpoint::from_env()
        .expect("Failed to parse endpoint from environment")
        .pool()
        .await
        .expect("Failed to create db endpoint");

    let engine: Engine<Rctx> = Engine::<Rctx>::new(config, db)
        .build()
        .expect("Failed to build engine");

    engine
}
Lastly, the main function itself pulls all of the above elements together. It reads a configuration from a ./config.yaml file and passes that to the function defined above to create a Warpgrapher Engine. It packages the Warpgrapher engine into an AppData struct to pass off to Actix Web and creates an HttpServer to begin fielding requests. The GraphQL API is bound to the /graphql path, and the playground is bound to the /playground path.
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let config_file = File::open("./config.yaml".to_string()).expect("Could not read file");
    let config = Configuration::try_from(config_file).expect("Failed to parse config file");
    let engine = create_engine(config.clone()).await;

    let graphql_endpoint = "/graphql";
    let playground_endpoint = "/playground";
    let bind_addr = "0.0.0.0".to_string();
    let bind_port = "5000".to_string();
    let addr = format!("{}:{}", bind_addr, bind_port);

    let app_data = AppData::new(engine);

    println!("Starting server on {}", addr);
    HttpServer::new(move || {
        App::new()
            .app_data(actix_web::web::Data::new(app_data.clone()))
            .wrap(Logger::default())
            .wrap(Cors::permissive())
            .route(graphql_endpoint, web::post().to(graphql))
            .route(playground_endpoint, web::get().to(playground))
    })
    .bind(&addr)
    .expect("Failed to start server")
    .run()
    .await
}
To view or clone a full repository project with an Actix Web integration, visit the warpgrapher-actixweb repository on GitHub.
AWS Lambda
To integrate Warpgrapher with AWS Lambda, begin by including the following dependencies in Cargo.toml.
Cargo.toml
[dependencies]
lambda_runtime = "0.3.0"
serde = "1.0.57"
serde_json = "1.0.57"
serde_derive = "1.0.57"
tokio = { version="1.4.0", features=["rt-multi-thread", "macros"] }
warpgrapher = { version="0.10.4", features = ["gremlin"] }
In the main.rs source file, include the following use declarations to import the structs and functions needed from dependencies.
use api_service::{create_app_engine, Error};
use lambda_runtime::handler_fn;
use serde_derive::Deserialize;
use serde_json::{json, Value};
use std::collections::HashMap;
use std::env::var_os;
use std::sync::Arc;
use warpgrapher::engine::database::gremlin::GremlinEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::juniper::BoxFuture;
Next, the lambda integration defines a GraphqlRequest struct that is used to deserialize query strings and request variables from the lambda interface for passing to the Warpgrapher engine.
#[derive(Clone, Debug, Deserialize)]
pub struct GraphqlRequest {
    pub query: String,
    pub variables: Option<Value>,
}
The AwsLambdaProxyRequest struct is used to deserialize requests incoming from AWS Lambda. The body of the request contains the content that will be deserialized into the GraphqlRequest struct described above.
#[derive(Clone, Debug, Deserialize)]
pub struct AwsLambdaProxyRequest {
    pub body: String,
    #[serde(rename = "requestContext")]
    pub request_context: AwsLambdaProxyRequestContext,
}
The aws_proxy_response function below packages a result returned by a Warpgrapher engine's execute function into a format that can be returned to the AWS Lambda framework.
pub fn aws_proxy_response(body: serde_json::Value) -> Result<Value, Error> {
    Ok(json!({
        "body": serde_json::to_string(&body)
            .map_err(|e| Error::JsonSerializationError { source: e })?,
        "headers": json!({}),
        "isBase64Encoded": false,
        "statusCode": 200
    }))
}
The create_app_engine function takes a database pool, Gremlin in this example, and returns a Warpgrapher Engine that can be used to handle GraphQL queries.
static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
";

// Note: the opening of this function was missing from the excerpt; the
// signature below is reconstructed from the surrounding description, and the
// exact type of the `db` pool parameter is an assumption.
pub async fn create_app_engine(db: DatabasePool) -> Result<Engine<Rctx>, Error> {
    // create config
    let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");

    // create warpgrapher engine
    let engine: Engine<Rctx> = Engine::<Rctx>::new(config, db).build()?;

    Ok(engine)
}
The main function ties the above elements together to process a GraphQL query when the lambda function is invoked. The function creates a database pool from environment variables, as described in the Databases section of the book. The main function then uses the create_app_engine function to create a Warpgrapher Engine. A closure is defined that deserializes the request from the AWS Lambda function and passes it to the Warpgrapher engine for execution using the execute method. The results are packaged up for response using the aws_proxy_response function. That handler closure is passed to the lambda runtime for invocation when requests need to be processed.
#[tokio::main]
async fn main() -> Result<(), Error> {
    // define database endpoint
    let endpoint = GremlinEndpoint::from_env()?;
    let db = endpoint.pool().await?;

    // create warpgrapher engine
    let engine = Arc::new(create_app_engine(db).await?);

    let func = handler_fn(
        move |event: Value, _: lambda_runtime::Context| -> BoxFuture<Result<Value, Error>> {
            let eng = engine.clone();
            Box::pin(async move {
                let engine = eng.clone();

                // parse handler event as aws proxy request and extract graphql request
                let proxy_request: AwsLambdaProxyRequest = serde_json::from_value(event)
                    .map_err(|e| Error::JsonDeserializationError {
                        desc: "Failed to deserialize aws proxy request".to_string(),
                        source: e,
                    })?;
                let gql_request: GraphqlRequest = serde_json::from_str(&proxy_request.body)
                    .map_err(|e| Error::JsonDeserializationError {
                        desc: "Failed to deserialize graphql request in body".to_string(),
                        source: e,
                    })?;

                // execute request
                let result = engine
                    .execute(
                        gql_request.query.to_string(),
                        gql_request.variables,
                        HashMap::new(),
                    )
                    .await?;

                // format response for api-gateway proxy
                aws_proxy_response(result)
                    .or_else(|e| aws_proxy_response(json!({ "errors": [format!("{}", e)] })))
            })
        },
    );

    lambda_runtime::run(func)
        .await
        .map_err(|_| Error::LambdaError {})?;

    Ok(())
}
Introduction
Warpgrapher is published as a Rust crate. There are crate features for each of the databases supported as a back-end. For Gremlin-based databases such as Apache Tinkerpop and Azure Cosmos DB, use the gremlin feature.
[dependencies]
warpgrapher = { version = "0", features = ["gremlin"] }
For Cypher-based databases, such as AWS Neptune and Neo4j, use the cypher feature.
[dependencies]
warpgrapher = { version = "0", features = ["cypher"] }
The database features are not mutually exclusive, so building with both features enabled will not do any harm. However, only one database may be used for an instance of the Warpgrapher engine. Compiling with no database features selected will succeed, but the resulting engine will have sharply limited functionality, as it will have no ability to connect to a back-end storage mechanism.
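As an illustration of that point, a project could enable both features at once without harm; this is only a sketch based on the feature names documented above.

```toml
[dependencies]
# both back-end features enabled; each engine instance still uses only one database
warpgrapher = { version = "0", features = ["cypher", "gremlin"] }
```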
Continue for a tutorial on using Warpgrapher to build a web service.
Databases
Warpgrapher translates GraphQL queries into CRUD operations against a back-end data store, based on a configuration specifying a data model. The tutorial will return to the topic of the configuration file soon, but the first step is configuring Warpgrapher to integrate with the back-end database. Without a graph database behind it, Warpgrapher's functionality is sharply limited.
Warpgrapher supports several database back-ends for graph data:
- Apache Tinkerpop
- AWS Neptune (Cypher variant)
- Azure Cosmos DB (Gremlin variant)
- Neo4J
It may be possible to use Warpgrapher with other graph databases. The list above is the set that the maintainers have used previously. Using each of the databases above requires selecting the appropriate crate feature and setting up environment variables to provide connection information to Warpgrapher, as described below.
Regardless of database, export an environment variable to control the size of the database connection pool:
export WG_POOL_SIZE=8
If the WG_POOL_SIZE variable is not set, Warpgrapher defaults to a pool the same size as the number of CPUs detected. If the number of CPUs cannot be detected, Warpgrapher defaults to a pool of 8 connections.
Gremlin-Based Databases
For all Gremlin-based databases, such as Apache Tinkerpop and Azure Cosmos DB, the following environment variables control the connection to the database.
- WG_GREMLIN_HOST is the host name for the database to which to connect.
- WG_GREMLIN_READ_REPLICAS provides a separate host name for read-only replica nodes, if being used for additional scalability. If not set, the read pool connects to the same host as the read/write connection pool.
- WG_GREMLIN_PORT provides the port to which Warpgrapher should connect.
- WG_GREMLIN_USER is the username to use to authenticate to the database, if required.
- WG_GREMLIN_PASS is the password to use to authenticate to the database, if required.
- WG_GREMLIN_USE_TLS is set to true if Warpgrapher should connect to the database over a TLS connection, and false if not using TLS. Defaults to true.
- WG_GREMLIN_VALIDATE_CERTS is set to true if Warpgrapher should validate the certificate used for a TLS connection, and false if not. Defaults to true. It should only be set to false in non-production environments.
- WG_GREMLIN_LONG_IDS is set to true if Warpgrapher should use long integers for vertex and edge identifiers. If false, Warpgrapher uses strings. Defaults to false. Consult your graph database's documentation to determine what values are valid for identifiers.
- WG_GREMLIN_SESSIONS is set to true if Warpgrapher mutations should be conducted within a single Gremlin session, which in some databases provides transactional semantics, and false if sessions should not be used. Defaults to false.
- WG_GREMLIN_VERSION may be set to 1, 2, or 3, to indicate the version of GraphSON serialization that should be used in communicating with the database. Defaults to 3.
Example configurations for supported databases are shown below. In many cases, environment variables are omitted where the defaults are correct.
Apache Tinkerpop
Add Warpgrapher to your project config with the gremlin feature enabled.
Cargo.toml
[dependencies]
warpgrapher = { version = "0", features = ["gremlin"] }
Set up environment variables to contact your Gremlin-based DB:
export WG_GREMLIN_HOST=localhost
export WG_GREMLIN_PORT=8182
export WG_GREMLIN_USER=username
export WG_GREMLIN_PASS=password
export WG_GREMLIN_USE_TLS=true
export WG_GREMLIN_VALIDATE_CERTS=true
export WG_GREMLIN_LONG_IDS=true
The WG_GREMLIN_VALIDATE_CERTS environment variable should be set to false only if Warpgrapher must ignore the validity of certificates. This may be necessary in a development or test environment, but it should always be set to true in production.
If you do not already have a Gremlin-based database running, you can run one using Docker:
docker run -it --rm -p 8182:8182 tinkerpop/gremlin-server:latest
To use an interactive gremlin console to manually inspect test instances, run
docker build -t gremlin-console -f tests/fixtures/gremlin-console/Dockerfile tests/fixtures/gremlin-console
docker run -i --net=host --rm gremlin-console:latest
In the console, connect to the remote graph:
:remote connect tinkerpop.server conf/remote.yaml
:remote console
AWS Neptune
Add Warpgrapher to your project config:
Cargo.toml
[dependencies]
warpgrapher = { version = "0", features = ["cypher"] }
Then set up environment variables to contact your Neptune DB:
export WG_CYPHER_HOST=127.0.0.1
export WG_CYPHER_READ_REPLICAS=127.0.0.1
export WG_CYPHER_PORT=7687
export WG_CYPHER_USER=
export WG_CYPHER_PASS=
Azure Cosmos DB
Add Warpgrapher to your project config:
Cargo.toml
[dependencies]
warpgrapher = { version = "0", features = ["gremlin"] }
Then set up environment variables to contact your Cosmos DB:
export WG_GREMLIN_HOST=*MY-COSMOS-DB*.gremlin.cosmos.azure.com
export WG_GREMLIN_PORT=443
export WG_GREMLIN_USER=/dbs/*MY-COSMOS-DB*/colls/*MY-COSMOS-COLLECTION*
export WG_GREMLIN_PASS=*MY-COSMOS-KEY*
export WG_GREMLIN_USE_TLS=true
export WG_GREMLIN_VALIDATE_CERTS=true
export WG_GREMLIN_VERSION=1
Note that when setting up your Cosmos database, you must configure it to offer a Gremlin graph API.
Note also that Warpgrapher does not automate the setting or use of a partition key. You must select the node property you wish to use as a partition key and appropriately include it in queries. When Warpgrapher loads relationships and nodes to resolve the full shape of a GraphQL query, it will query by ID, which will likely result in cross-partition queries. This should be fine for many use cases. Extending Warpgrapher to allow more control over and use of partition keys for nested relationship resolution is future work.
Be advised that Gremlin traversals are not executed atomically within Cosmos DB. A traversal may fail part way through if, for example, one reaches the read unit capacity limit. See this article for details. The workaround proposed in the article helps, but even idempotent queries do not guarantee atomicity. Warpgrapher does not use idempotent queries with automated retries to overcome this shortcoming of Cosmos DB, so note that if using Cosmos, there is a risk that a failed query could leave partially applied results behind.
Neo4J
Add Warpgrapher to your project config.
[dependencies]
warpgrapher = { version = "0", features = ["cypher"] }
Then set up environment variables to contact your Neo4J DB.
export WG_CYPHER_HOST=127.0.0.1
export WG_CYPHER_READ_REPLICAS=127.0.0.1
export WG_CYPHER_PORT=7687
export WG_CYPHER_USER=neo4j
export WG_CYPHER_PASS=*MY-DB-PASSWORD*
Note that the WG_CYPHER_READ_REPLICAS variable is optional. It is used for Neo4J cluster configurations in which there are both read/write nodes and read-only replicas. If the WG_CYPHER_READ_REPLICAS variable is set, read-only queries will be directed to the read replicas, whereas mutations will be sent to the instance(s) at WG_CYPHER_HOST.
If you do not already have a Neo4J database running, you can run one using Docker:
docker run --rm -p 7687:7687 -e NEO4J_AUTH="${WG_CYPHER_USER}/${WG_CYPHER_PASS}" neo4j:4.4
Warpgrapher Config
The Quickstart demonstrated using a string constant to hold the Warpgrapher configuration. It is also possible to read the configuration from a YAML file or to build a configuration programmatically using the configuration module's API. The following three configurations are all equivalent.
String Configuration
The following is the string constant from the Quickstart.
static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
";
YAML File Configuration
The same configuration can be created in a YAML file, as follows.
config.yaml
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
The configuration can then be loaded and used to set up a Configuration struct.
main.rs
let config_file = File::open("config.yaml").expect("Could not read file");
let config = Configuration::try_from(config_file).expect("Failed to parse config file");
Programmatic Configuration
The code below shows the creation of the same configuration programmatically.
// build warpgrapher config
let config = Configuration::new(
    1,
    vec![Type::new(
        "User".to_string(),
        vec![Property::new(
            "email".to_string(),
            UsesFilter::all(),
            "String".to_string(),
            false,
            false,
            None,
            None,
        )],
        Vec::new(),
        EndpointsFilter::all(),
    )],
    vec![],
);
The programmatic version includes some function arguments that do not appear in the YAML versions of the configuration, because they take on default values when omitted from a YAML configuration. For example, the UsesFilter on a property allows granular control over whether a property is included in create, read, update, and delete operations. This allows, among other things, the creation of read-only attributes. Similarly, the EndpointsFilter determines whether the User type has create, read, update, and delete operations exposed in the GraphQL schema. For example, if users are created by a separate account provisioning system, it might be desirable to filter out the create operation, so that the GraphQL schema doesn't allow the possibility of creating new users.
Types
The Quickstart presented a first example of a Warpgrapher configuration, shown again here.
static CONFIG: &str = "
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
";
Type Configuration
Recall that the version value indicates the configuration file format version to be used. Right now, the only valid value is 1. The next element in the configuration is the data model. The model object is a list of types. The example shown in the Quickstart uses many defaults for simplicity. The definition below shows the full range of options for property definitions. Don't worry about relationships between types for the moment; those are covered in the next section.
model:
  - name: String
    props:
      - name: String
        uses:
          create: Boolean
          query: Boolean
          update: Boolean
          output: Boolean
        type: String # Boolean | Float | ID | Int | String
        required: Boolean
        list: Boolean
        resolver: String
        validator: String
    endpoints:
      read: Boolean
      create: Boolean
      update: Boolean
      delete: Boolean
Right under the model object is a list of types. The first attribute describing a type is a name. In the example from the Quickstart, the name of the type is User.
The second attribute describing a type is props. The props attribute is a list of properties that are stored on nodes of that type. Each property is described by several configuration attributes, as follows.
The name attribute is a string that identifies the property. It must be unique within the scope of the type. In the Quickstart example, the sole property on the User type is named email.
The uses attribute is an object that contains four fields, create, query, update, and output, each a boolean value. The fields within the uses attribute control whether the property is present in various parts of the GraphQL schema. If the create attribute is true, the property will be included in the GraphQL input for creation operations. If false, the property will be omitted from creation operations. If the query attribute is true, the property will be included in the GraphQL schema for search query input. If false, the property will be omitted from search query operations. If the update attribute is true, the property will be included in the GraphQL schema input for updating existing nodes. If false, the property will be omitted from the update schema. Lastly, if the output attribute is true, the property will be included in the GraphQL schema for nodes returned to the client. If false, the property will be omitted from the output.
By default, all uses boolean attributes are true, meaning that the property is included in all relevant areas of the GraphQL schema. Selectively setting some of the uses attributes handles use cases where a property should not be available for some operations. For example, one might set the create attribute to false if a property is a calculated value that should never be set directly. One might set update to false to make an attribute immutable; for example, the email property of the User type might have update set to false if GraphQL clients should not be able to tamper with the identities of users. One might set output to false for properties that should never be read through the GraphQL interface, such as a password property.
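The scenarios above can be sketched in configuration form. This example assumes, as described above, that any uses flag omitted from the YAML defaults to true; the password property is hypothetical and added only for illustration.

```yaml
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        uses:
          update: false   # immutable: clients cannot change a user's email
      - name: password
        type: String
        uses:
          output: false   # never included in query results
```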
The type attribute of the property definition is a String value that must be one of Boolean, Float, ID, Int, or String, defining the type of the property.
If the required attribute of the property definition is false, the property is optional. By default, this attribute is true, which means the property must be provided when nodes of this type are created (unless hidden from the create use) and must be present (non-null) when retrieving the node from Warpgrapher (again, unless hidden from the output use).
If the `list` attribute of the property definition is true, the property is a list of scalar values of the given `type`. If `list` is false, the property is a single value of that scalar type.
The `resolver` attribute is a text key used to identify a custom-written resolver function. Warpgrapher allows applications to define custom resolvers that do more or different things than the default CRUD operations automatically provided by Warpgrapher itself. For example, a custom resolver might dynamically calculate a value, such as a total or average, rather than just returning a value from the database. Custom resolvers for dynamic properties are covered in greater detail later in the book.
The `validator` attribute is a text key used to identify a function that validates an input. For example, a validation function might check an email address against an email validation regex. Validation functions are covered in greater detail later in the book.
Note that the `endpoints` attribute is on the type definition, not the property definition, as indicated by the indentation in the YAML example above. The `endpoints` attribute is somewhat similar to the `uses` boolean, but at the level of the whole type rather than a single property. If the `read` attribute is true, Warpgrapher generates a query in the GraphQL schema so that nodes of this type can be retrieved; if false, no query is generated. If the `create` attribute is true, Warpgrapher generates a node creation mutation in the GraphQL schema; if false, no creation mutation is generated. If the `update` attribute is true, Warpgrapher generates a node update mutation; if false, no update mutation is generated. Lastly, if the `delete` attribute is true, Warpgrapher generates a node deletion mutation; if false, no delete mutation is generated.
Generated Schema
Warpgrapher uses the configuration described above to automatically generate a GraphQL schema and default resolvers to create, read, update, and delete nodes of the types defined in the configuration's model section. The remainder of this section walks through the contents of the schema in detail.
The top level GraphQL `Query` has two queries within it, as shown below. The `_version` query returns a scalar `String` with the version of the GraphQL service. The value returned is set with the `with_version` method on the `EngineBuilder`.

```graphql
type Query {
  User(input: UserQueryInput, options: UserOptions): [User!]
  _version: String
}
```
The `User` query, above, is generated by Warpgrapher for the retrieval of `User` nodes. The query takes two parameters: an `input` parameter that provides any search criteria to narrow down the set of users to be retrieved, and an `options` object. The query returns a list of `User` types.
The `UserQueryInput`, defined in the schema snippet below, is used to provide search parameters identifying the `User` nodes to return to the client. The `User` node configuration had only one property, `email`. Warpgrapher automatically adds an `id` property that contains a unique identifier for nodes. In the GraphQL schema, the id is always represented as a string. However, some Gremlin back-ends require the id to be an integer, in which case the id field in the GraphQL schema is a String that can be successfully parsed into an integer value.
```graphql
input UserQueryInput {
  email: StringQueryInput
  id: StringQueryInput
}
```
Note that the types of both `email` and `id` are `StringQueryInput`, not a simple `String` scalar. This is because the query input allows for more than just an exact match.
```graphql
input StringQueryInput {
  CONTAINS: String
  EQ: String
  GT: String
  GTE: String
  IN: [String!]
  LT: String
  LTE: String
  NOTCONTAINS: String
  NOTEQ: String
  NOTIN: [String!]
}
```
The `StringQueryInput` has various options for matching a String more flexibly than an exact match. The `CONTAINS` operator looks for the associated String value anywhere in the target property (e.g. the `email` or `id` property of a `User` node). `EQ` looks for an exact match. `GT` and `GTE` are greater-than and greater-than-or-equal, which are useful for searching for ranges based on alphabetization, as are `LT` and `LTE` (less-than and less-than-or-equal). The `IN` operator allows searching for any string that is within a given set of Strings. `NOTCONTAINS` is the opposite of `CONTAINS`, looking for property values that do not contain the provided String. `NOTEQ` looks for non-matching Strings. And finally, `NOTIN` matches property values that do not appear in the provided set of Strings.
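As an illustrative sketch (the email value and selection set are hypothetical), a query for all users whose address contains a given domain might look like:

```graphql
query {
  User(input: { email: { CONTAINS: "@example.com" } }) {
    id
    email
  }
}
```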
The `options` argument, described above as an argument for the `User` query as a whole, is of type `UserOptions`. The `UserOptions` type has a single property, called `sort`, which is a list of zero or more `UserSort` objects. Each `UserSort` object has two enumeration properties, `direction` and `orderBy`.
```graphql
type UserOptions {
  sort: [UserSort!]
}

type UserSort {
  direction: DirectionEnum
  orderBy: UserOrderByEnum!
}

enum DirectionEnum {
  ascending
  descending
}

enum UserOrderByEnum {
  id
  email
}
```
The `UserOrderByEnum` has variant values for each of the properties (but not relationships) on a `User`. By including one or more values in the `sort` array provided to `UserOptions`, it is possible to sort results coming back from Warpgrapher. The `direction` property determines whether the results are returned in ascending or descending sort order. The `orderBy` field determines on which property the results are sorted. If the `sort` array contains more than one value, then groups of results with the same first sort key are further sorted by the second key, and so on. For example, a `sort` array might have entries for `joinDate` and then `name` to sort first by the date someone joined, and alphabetically for all people who joined on the same date.
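For instance, a sketch of a request returning users in ascending order of email address (selection set illustrative) might be:

```graphql
query {
  User(options: { sort: [{ direction: ascending, orderBy: email }] }) {
    id
    email
  }
}
```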
The results of the query are returned in a `User` type, shown below.

```graphql
type User {
  email: String
  id: ID!
}
```
The `User` type is the definition of the output type for the `User` GraphQL query. The names are the same, but these are two distinct things in the GraphQL schema; the `User` query returns an array of zero or more `User` types. The `User` type has two fields, an `id` and an `email`. The id is a unique identifier for that node, which may be an integer or a UUID, depending on the graph database used. The `email` string is the single property that was defined on the example schema.
```graphql
type Mutation {
  UserCreate(input: UserCreateMutationInput!, options: UserOptions): User
  UserDelete(input: UserDeleteInput!, options: UserOptions): Int
  UserUpdate(input: UserUpdateInput!, options: UserOptions): [User!]
}
```
In addition to providing queries to retrieve existing nodes, Warpgrapher also automatically generates GraphQL schema elements and resolvers for create, update, and delete operations. The schema snippet above shows the mutations that are generated for the `User` node in the example configuration. All three of the mutations take an `options` argument, which was described in the section on queries, above. Additionally, all three mutations take an `input` value that provides the information necessary to complete the create, update, or delete operation, respectively. Creation operations return the created node. Update operations return all the nodes that were matched and updated. Lastly, the delete operation returns the number of nodes that were deleted. The input arguments are detailed below.
```graphql
input UserCreateMutationInput {
  email: String
  id: ID
}
```
The `UserCreateMutationInput` mutation input includes the email property defined in the example configuration. It also includes an `id` property. Note that the `id` property is optional. If not provided by the client, it will be set to a unique identifier by the Warpgrapher server. The reason that clients are permitted to set the `id` when creating nodes is to allow for offline mode support, which may require the creation of identifiers within local caches that should remain the same after synchronization with the server.
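A sketch of a creation request using this input (the email value is hypothetical) might look like:

```graphql
mutation {
  UserCreate(input: { email: "alice@example.com" }) {
    id
    email
  }
}
```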
```graphql
input UserDeleteInput {
  DELETE: UserDeleteMutationInput
  MATCH: UserQueryInput
}

input UserDeleteMutationInput
```
The `UserDeleteInput` input is used to identify which nodes to delete. Note that the `MATCH` part of the argument is the very same `UserQueryInput` type used in the `User` query schema element above. So searching for which nodes to delete is the same input format used to search for nodes to return in a read query. The `UserDeleteMutationInput` is empty right now, and may be omitted. It will become relevant later, in the discussion on relationships between nodes.
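A sketch of a deletion request, matching on a hypothetical email address; since the mutation returns an `Int` count of deleted nodes, no selection set is needed:

```graphql
mutation {
  UserDelete(input: { MATCH: { email: { EQ: "alice@example.com" } } })
}
```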
```graphql
input UserUpdateInput {
  MATCH: UserQueryInput
  SET: UserUpdateMutationInput
}

input UserUpdateMutationInput {
  email: String
}
```
Lastly, the `UserUpdateInput` input is provided to the update mutation in order to select the nodes to be updated and describe the update to be applied. The `MATCH` attribute is used to identify which nodes require the update. Note that the type of the `MATCH` attribute is `UserQueryInput`, which is the same type used for searching for nodes in the GraphQL query above. The `SET` attribute is used to provide the new values to which the matching nodes should be set. In this example, it is a single String value for the `email` of the `User`. Note that `id`s are set only at creation. They cannot be updated later.
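A sketch of an update request that matches on an old email value and sets a new one (both values hypothetical):

```graphql
mutation {
  UserUpdate(input: {
    MATCH: { email: { EQ: "alice@example.com" } }
    SET: { email: "alice@example.org" }
  }) {
    id
    email
  }
}
```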
Full Schema Listing
The full schema, described in pieces above, is included below:
```graphql
input UserDeleteInput {
  DELETE: UserDeleteMutationInput
  MATCH: UserQueryInput
}

input UserQueryInput {
  email: StringQueryInput
  id: StringQueryInput
}

type Mutation {
  UserCreate(input: UserCreateMutationInput!, options: UserOptions): User
  UserDelete(input: UserDeleteInput!, options: UserOptions): Int
  UserUpdate(input: UserUpdateInput!, options: UserOptions): [User!]
}

input UserUpdateMutationInput {
  email: String
}

type Subscription

input UserUpdateInput {
  MATCH: UserQueryInput
  SET: UserUpdateMutationInput
}

type Query {
  User(input: UserQueryInput, options: UserOptions): [User!]
  _version: String
}

input UserDeleteMutationInput

type User {
  email: String
  id: ID!
}

input UserCreateMutationInput {
  email: String
  id: ID
}

input StringQueryInput {
  CONTAINS: String
  EQ: String
  GT: String
  GTE: String
  IN: [String!]
  LT: String
  LTE: String
  NOTCONTAINS: String
  NOTEQ: String
  NOTIN: [String!]
}
```
Relationships
The Quickstart example used a very simple model with only one type, containing one property. The Types section explored the configuration file format and the resulting GraphQL schema in more detail. However, Warpgrapher can generate create, read, update, and delete operations for relationships between types as well. The configuration below describes two types and a relationship between them.
```yaml
version: 1
model:
  - name: User
    props:
      - name: email
        type: String
        required: false
  - name: Organization
    props:
      - name: name
        type: String
        required: false
    rels:
      - name: members
        nodes: [User]
        list: true
        props:
          - name: joinDate
            type: String
            required: false
```
The configuration above adds a second type, called `Organization`. The definition of the organization type contains the `rels` attribute, which was not seen in the earlier example. The `rels` attribute contains a list of permissible relationships between nodes. In this case, the configuration adds a `members` relationship from nodes of the `Organization` type to nodes of the `User` type, indicating that certain users are members of an organization. The `name` attribute in the configuration is the name of the relationship and must be unique within the scope of that type. The `nodes` attribute is a list of other types that may be at the destination end of the relationship. In this case, the only type that may be a member is the `User` type, but in other use cases, the destination node might be allowed to be one of several types. Lastly, the `list` attribute is `true`, indicating that an `Organization` may have more than one member.
Relationship Configuration
The example configuration above is fairly simple, and does not make use of several optional attributes. The definition below shows the full set of configuration options that are permissible on a relationship.
```yaml
model:
  - name: String
    rels:
      - name: String
        nodes: [String] # Values in the list must be other types in the model
        list: Boolean
        props:
          - name: String
            uses:
              create: Boolean
              query: Boolean
              update: Boolean
              output: Boolean
            type: String # Boolean | Float | ID | Int | String
            required: Boolean
            list: Boolean
            resolver: String
            validator: String
        endpoints:
          read: Boolean
          create: Boolean
          update: Boolean
          delete: Boolean
        resolver: String
```
The snippet above shows that relationships are defined in a list under the `rels` attribute within a type definition. Each relationship has a `name` that must be unique within the scope of that type. The `nodes` attribute is a list of names of types within the model that can appear as destination nodes in the relationship. Note that a type may appear in its own relationships' `nodes` lists; a node is permitted to have relationships to nodes of the same type.
If the `list` attribute is `true`, then a node may have relationships of the same type to multiple destination nodes, modeling one-to-many relationships. If `list` is false, then the node may only have a single relationship of that type, to a single destination node.
The `props` attribute on a relationship works the same way that the `props` attribute works on nodes, except that the properties are associated with the relationship rather than with the node. See the description of the `props` attribute in the section on types for more details.
Similarly, the `endpoints` attribute on relationships works the same way that it does on nodes. The individual boolean attributes within the `endpoints` object control whether Warpgrapher generates GraphQL schema elements for create, read, update, and delete operations. Just as with types, the default for all the boolean values is `true`, meaning that by default Warpgrapher creates schema elements and resolvers for all CRUD operations.
Lastly, the `resolver` attribute is also similar to the attribute of the same name on property definitions. The string in the `resolver` attribute is mapped to a custom-written Rust function provided when setting up the Warpgrapher engine. This allows systems using Warpgrapher to control the behavior of resolving some relationships. Use cases for this include dynamically-generated relationships that are computed at query time rather than being stored in the back-end data store.
Generated Schema
This section describes each of the GraphQL schema elements that Warpgrapher generates for CRUD operations on relationships. Discussion of the schema elements related solely to types, absent relationships, was covered previously in the types section.
Queries in a Model with Relationships
The top level GraphQL query object includes three queries. This should make intuitive sense. The model has two node types, `Organization` and `User`, and one relationship, the `OrganizationMembers` relationship from a source organization to a destination user that is a member of that organization. Warpgrapher's generated schema allows for querying either node type or the relationship between them. As will be discussed in detail below, the inputs to these query operations have a recursive structure, so that using the top level query for the relationship, it is possible to filter based on the properties of the source or destination nodes. Similarly, when querying for a node type, it is possible to add search parameters related to relationships, the destinations of those relationships, and so on.
```graphql
type Query {
  Organization(
    input: OrganizationQueryInput,
    options: OrganizationOptions
  ): [Organization!]
  OrganizationMembers(
    input: OrganizationMembersQueryInput,
    options: OrganizationMembersOptions
  ): [OrganizationMembersRel!]
  User(input: UserQueryInput, options: UserOptions): [User!]
  _version: String
}
```
Querying for a Relationship
In the top level GraphQL query, note that a new query, called `OrganizationMembers`, has been generated for the relationship. This query has an input parameter, `OrganizationMembersQueryInput`, that provides search query arguments to select the set of relationships to be returned.
The `OrganizationMembersQueryInput` query parameter, defined below, provides a means to search for a given instance of a relationship. It is possible to search based on an `id` or set of ids, and the `joinDate` attribute allows queries based on the properties of the relationship. In addition to using the `id` or another property on the relationship, the `OrganizationMembersQueryInput` parameter also includes a `src` and a `dst` attribute. These attributes allow Warpgrapher clients to search for relationships based on properties of the source or destination nodes joined by the relationship.
```graphql
input OrganizationMembersQueryInput {
  dst: OrganizationMembersDstQueryInput
  id: StringQueryInput
  joinDate: StringQueryInput
  src: OrganizationMembersSrcQueryInput
}
```
The input objects for the `src` and `dst` attributes are shown below. Note that for the source query input, the only attribute is an `Organization` attribute of type `OrganizationQueryInput`, and that for the destination, the only attribute is a `User` attribute of type `UserQueryInput`. There are two important observations here. First, the reason for having the `OrganizationMembersDstQueryInput` object is that a relationship might have more than one node type as a possible destination. When building the GraphQL schema, Warpgrapher has to allow for the client to query any of those possible destination nodes. In this example, the only type of destination node is a `User`, so that's the only possibility shown below. If the `nodes` list had more types of nodes, any of those node types could be queried through the `OrganizationMembersDstQueryInput`. The second observation is that both the `OrganizationQueryInput` and the `UserQueryInput` inputs are the same input parameters used to query for a set of nodes in the `Organization` and `User` root level GraphQL queries shown above.
```graphql
input OrganizationMembersSrcQueryInput {
  Organization: OrganizationQueryInput
}

input OrganizationMembersDstQueryInput {
  User: UserQueryInput
}
```
We'll come back to the node-based query input in a moment, in the section below on Querying for a Node. First, the code snippet below shows the schema for output from the relationship query. The relationship includes four attributes: a unique identifier for the relationship called `id`, a `joinDate` for the property configured on the relationship, and `src` and `dst` attributes that represent the source and destination nodes, respectively.
```graphql
type OrganizationMembersRel {
  dst: OrganizationMembersNodesUnion!
  id: ID!
  joinDate: String
  src: Organization!
}
```
The `src` attribute in the `OrganizationMembersRel` output type is an `Organization` type, which is exactly the same output type used for node queries, and so will be covered in the section on querying for nodes, below. The `dst` attribute is a little more complex. Recall from the description of the configuration schema that Warpgrapher may connect a source node type to a destination that can be one of many node types. A GraphQL union type is used to represent the multiple destination node types that may exist. As shown in the schema snippet below, in this example the `OrganizationMembersNodesUnion` has only a single destination node type, `User`. A more complex configuration might have multiple node types in the union.

```graphql
union OrganizationMembersNodesUnion = User
```
Note that the `User` type is the same type that is used to return users in queries for nodes.
The `options` argument, described above as an argument for the `OrganizationMembers` query as a whole, is of type `OrganizationMembersOptions`. The `OrganizationMembersOptions` type has a single property, called `sort`, which is a list of zero or more `OrganizationMembersSort` objects. Each `OrganizationMembersSort` object has two enumeration properties, `direction` and `orderBy`.
```graphql
type OrganizationMembersOptions {
  sort: [OrganizationMembersSort!]
}

type OrganizationMembersSort {
  direction: DirectionEnum
  orderBy: OrganizationMembersOrderByEnum!
}

enum DirectionEnum {
  ascending
  descending
}

enum OrganizationMembersOrderByEnum {
  id
  dst:email
}
```
The `OrganizationMembersOrderByEnum` has variant values for each of the properties (but not relationships) on the `OrganizationMembers` relationship, though in this case that's only the `id` property. Additionally, the enum has variants for each of the properties on the destination object, allowing the results to be sorted either by properties on the relationship itself, or those on the destination object. By including one or more values in the `sort` array provided to `OrganizationMembersOptions`, it is possible to sort results coming back from Warpgrapher. The `direction` property determines whether the results are returned in ascending or descending sort order. The `orderBy` field determines on which property the results are sorted. For example, above, an `orderBy` field with a value of `dst:email` would sort the organization members relationship results in alphabetical order of member email addresses. If the `sort` array contains more than one value, then groups of results with the same first sort key are further sorted by the second key, and so on. For example, a `sort` array might have entries for `joinDate` and then `name` to sort first by the date someone joined, and alphabetically for all people who joined on the same date.
Querying for a Node
The root GraphQL `Query` object has queries for each of the node types in the configuration. To see how relationships affect node queries, have a look at the `Organization` query, beginning with the `OrganizationQueryInput` definition in the snippet below. In addition to the `id` and `name` attributes for searching based on the scalar properties of the type, the schema also includes a `members` attribute, of type `OrganizationMembersQueryInput`. This is the same input object described above that's used in the root level query for the `OrganizationMembers` relationship. This recursive schema structure is quite powerful, as it allows the client to query for nodes based on a combination of the node's property values, the values of properties in the relationships that it has, and the values of properties in the destination nodes at the other end of those relationships, to any level of depth. For example, it would be easy to construct a query that retrieves all of the organizations that contain a particular user as a member. For examples of relationship-based queries, see the chapter on API usage.
```graphql
input OrganizationQueryInput {
  id: StringQueryInput
  members: OrganizationMembersQueryInput
  name: StringQueryInput
}
```
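As a sketch of that recursive structure, a query for all organizations that have a member with a given email address (the value and selection set are hypothetical) might look like:

```graphql
query {
  Organization(input: {
    members: { dst: { User: { email: { EQ: "alice@example.com" } } } }
  }) {
    id
    name
  }
}
```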
Relationship information can be navigated in the output type for the node, as well. The `Organization` output type shown in the snippet below includes both the scalar properties on the type, the `id` and `name`, as well as the relationship to the `members` of the organization. The `members` attribute accepts an input of type `OrganizationMembersQueryInput`. This is the same input type that is used to query for members relationships from the GraphQL root query, described above. This means that when retrieving `Organization` nodes, it's possible to filter the set of members that you want to retrieve in a nested query. Again, the recursive structure of the schema generated by Warpgrapher allows you the flexibility to query to any level of depth in a sub-graph that is needed.
```graphql
type Organization {
  id: ID!
  members(input: OrganizationMembersQueryInput): [OrganizationMembersRel!]
  name: String
}
```
Mutations in a Model with Relationships
The GraphQL schema's top level mutation object contains nine mutations. This should make intuitive sense. There are three mutations (create, update, and delete), and three kinds of things that can be mutated: organization nodes, user nodes, and membership relationships between organizations and users. There are quite a few nested input and output types contributing to these mutations. The high-level principle to keep in mind is that Warpgrapher allows recursive operations that support manipulation of whole sub-graphs at a time. For example, node mutations have nested input objects that allow manipulation of the relationships on those nodes, and the destination nodes at the end of those relationships, and so on.
```graphql
type Mutation {
  OrganizationCreate(
    input: OrganizationCreateMutationInput!
    options: OrganizationOptions
  ): Organization
  OrganizationDelete(input: OrganizationDeleteInput!, options: OrganizationOptions): Int
  OrganizationMembersCreate(
    input: OrganizationMembersCreateInput!
    options: OrganizationMembersOptions
  ): [OrganizationMembersRel!]
  OrganizationMembersDelete(
    input: OrganizationMembersDeleteInput!
    options: OrganizationMembersOptions
  ): Int
  OrganizationMembersUpdate(
    input: OrganizationMembersUpdateInput!
    options: OrganizationMembersOptions
  ): [OrganizationMembersRel!]
  OrganizationUpdate(
    input: OrganizationUpdateInput!
    options: OrganizationOptions
  ): [Organization!]
  UserCreate(input: UserCreateMutationInput!, options: UserOptions): User
  UserDelete(input: UserDeleteInput!, options: UserOptions): Int
  UserUpdate(input: UserUpdateInput!, options: UserOptions): [User!]
}
```
Mutating a Relationship
Creating a Relationship
The snippet below contains the input for creation of one or more `OrganizationMembers` relationships. There are two attributes, `MATCH` and `CREATE`. The `MATCH` attribute is used to identify the organization or organizations that should be matched as the source of the relationship(s) to be created. It has the same type, `OrganizationQueryInput`, that is used to query for nodes using the `Organization` query under the GraphQL `Query` root described above. The match query may select more than one node, allowing similar relationships to be created in bulk. Matching existing source nodes is the only option when creating a relationship. If it is necessary to create the node at the source end of the relationship, see the node creation operation, in this case `OrganizationCreate`, instead.
```graphql
input OrganizationMembersCreateInput {
  CREATE: [OrganizationMembersCreateMutationInput!]
  MATCH: OrganizationQueryInput
}
```
The `CREATE` attribute has a type of `OrganizationMembersCreateMutationInput`. That input structure is shown in the schema snippet below. It includes the `joinDate` attribute on the relationship. The `id` is accepted as an input to facilitate offline operation, in which the client may need to choose the unique identifier for the relationship. If the client does not choose the identifier, it will be randomly assigned by the Warpgrapher service.
```graphql
input OrganizationMembersCreateMutationInput {
  dst: OrganizationMembersNodesMutationInputUnion!
  id: ID
  joinDate: String
}
```
The `dst` property in the `OrganizationMembersCreateMutationInput` above is of type `OrganizationMembersNodesMutationInputUnion`, which is included in the schema snippet below. Don't be intimidated by the lengthy name of the union type. Recall that in the configuration above, the destination of a relationship is allowed to have more than one type. In this configuration, it only has one type, but the `OrganizationMembersNodesMutationInputUnion` is what allows the destination of the relationship to have multiple types. In this case, the only option is `User`, with a type of `UserInput`.

```graphql
input OrganizationMembersNodesMutationInputUnion {
  User: UserInput
}
```
The `UserInput` type, which provides the destination node for the relationship(s) to be created, has two attributes. When using the `EXISTING` attribute, Warpgrapher searches the graph database for a set of nodes matching the `UserQueryInput` search criteria and uses the results as the destination nodes for creation of the relationship(s). Note that this `UserQueryInput` type is the same input type that is used to query for users in the `User` query under the GraphQL root `Query`. No matter where in the recursive hierarchy, searching for `User` nodes always uses the same input. The `NEW` attribute creates a new `User` node as the destination of the relationship. Note that the `UserCreateMutationInput` input type is the same input type used to create a `User` node in the `UserCreate` mutation under the GraphQL root `Mutation` object.
```graphql
input UserInput {
  EXISTING: UserQueryInput
  NEW: UserCreateMutationInput
}
```
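Putting these inputs together, a sketch of a mutation creating a membership relationship from an existing organization to a newly created user (the organization name, email, and date are hypothetical) might look like:

```graphql
mutation {
  OrganizationMembersCreate(input: {
    MATCH: { name: { EQ: "Acme" } }
    CREATE: [{
      joinDate: "2024-01-01"
      dst: { User: { NEW: { email: "alice@example.com" } } }
    }]
  }) {
    id
    joinDate
  }
}
```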
The output of creating one or more relationships, `OrganizationMembersRel`, is the same output type returned from querying for the organization's members relationship, as was described in the section on queries, above. It contains the newly created relationships.
Updating a Relationship
The input for a relationship update mutation, `OrganizationMembersUpdateInput`, is shown in the schema snippet below. The update input consists of two parts. The `MATCH` attribute is a query input to identify the relationships that should be updated. Note that the match input type, `OrganizationMembersQueryInput`, is the same input type used to provide search parameters when searching for relationships under the `OrganizationMembers` query under the GraphQL root `Query` object. The `SET` attribute is used to describe the changes that should be made to values in the relationship(s) matched by the `MATCH` parameter, and potentially the sub-graph beneath.
```graphql
input OrganizationMembersUpdateInput {
  MATCH: OrganizationMembersQueryInput
  SET: OrganizationMembersUpdateMutationInput!
}
```
The `SET` input is of type `OrganizationMembersUpdateMutationInput`, shown in the snippet below. The `joinDate` attribute is the same input type used during relationship creation operations, described in the section above. The `src` and `dst` attributes allow a single update to provide new values not only for the relationship properties, but also for properties on the source and destination nodes at the ends of the relationship.
```graphql
input OrganizationMembersUpdateMutationInput {
  dst: OrganizationMembersDstUpdateMutationInput
  joinDate: String
  src: OrganizationMembersSrcUpdateMutationInput
}
```
The source and destination node input types are shown in the schema snippets below. Note that the types, `OrganizationUpdateMutationInput` and `UserUpdateMutationInput`, are the same input types used for the `SET` attributes in the single-node update operation, described in the section on single-node mutation operations below. Thus, we have hit the point where the GraphQL schema structure that Warpgrapher generates is recursive. A relationship update mutation can update the properties on the relationship, as described just above, or, using this recursive input structure, reach down into the source and destination nodes at the ends of the relationship and edit their properties as well.
```graphql
input OrganizationMembersSrcUpdateMutationInput {
  Organization: OrganizationUpdateMutationInput
}

input OrganizationMembersDstUpdateMutationInput {
  User: UserUpdateMutationInput
}
```
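As a sketch, an update that changes a relationship's join date and, through the recursive input, the destination user's email (all values hypothetical) might look like:

```graphql
mutation {
  OrganizationMembersUpdate(input: {
    MATCH: { src: { Organization: { name: { EQ: "Acme" } } } }
    SET: {
      joinDate: "2024-02-01"
      dst: { User: { email: "alice@example.org" } }
    }
  }) {
    id
    joinDate
  }
}
```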
The output for updating one or more relationships, `OrganizationMembersRel`, is the same output type returned from querying for an organization's members relationship, as was described in the section on queries, above. For update operations, it returns the list of relationships that were updated in the mutation.
Deleting a Relationship
The input for a relationship delete mutation, `OrganizationMembersDeleteInput`, is shown in the schema snippet below. The `MATCH` attribute is used to query for the relationships to be deleted. Note that the input type, `OrganizationMembersQueryInput`, is the same input type used to query for relationships under the relationship query in the GraphQL root `Query` object, described in the section on querying, above.
```graphql
input OrganizationMembersDeleteInput {
  MATCH: OrganizationMembersQueryInput
  dst: OrganizationMembersDstDeleteMutationInput
  src: OrganizationMembersSrcDeleteMutationInput
}
```
The src and destination delete mutation inputs are not particularly interesting for this simple schema. The input type for the src of the relationship contains a single Organization
attribute that has the same type as the deletion input for an OrganizationDelete
mutation. However, the only option in that type is deletion of members, which is what is already being done. On the destination side, because the User
type has no relationships of its own, the UserDeleteMutationInput
object is empty altogether. Thus, for the most part, the src
and dst
attributes on the OrganizationMembersDeleteInput
are not particularly useful, though in more complex models, they allow deletion of multiple nodes and relationships in a single query.
input OrganizationMembersSrcDeleteMutationInput {
Organization: OrganizationDeleteMutationInput
}
input OrganizationMembersDstDeleteMutationInput {
User: UserDeleteMutationInput
}
input OrganizationDeleteMutationInput {
members: [OrganizationMembersDeleteInput!]
}
input UserDeleteMutationInput
The output from the relationship deletion mutation is an integer with a count of the relationships deleted.
Mutating a Node
In many ways, modifying a node in a data model that includes relationships is similar to what was described in the node-only portion of the book, previously. Thus, this section doesn't repeat that same content, focusing instead only on the changes that come from having a relationship in the mix.
Creating a Node
The snippet below contains the input for creation of an organization. Note the members
attribute, of type OrganizationMembersCreateMutationInput
, which allows for the creation of members attributes in the same mutation that creates the organization. The OrganizationMembersCreateMutationInput
input type is the same one that is used for the CREATE
attribute in the OrganizationMembersCreate
mutation under the root GraphQL mutation
object. Thus, when creating a node, you can create members for it using the same full flexibility provided by the mutation dedicated to creating relationships. The recursive nature of the creation inputs allows for the creation of entire sub-graphs.
input OrganizationCreateMutationInput {
id: ID
members: [OrganizationMembersCreateMutationInput!]
name: String
}
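For example, a single creation mutation can build out a small sub-graph, creating an organization along with a brand new member in one request (names and values are illustrative):

```graphql
mutation {
  OrganizationCreate(
    input: {
      name: "Example Org"
      members: {
        joinDate: "2021-01-01"
        dst: { User: { NEW: { email: "founder@example.com" } } }
      }
    }
  ) {
    id
    name
  }
}
```

The Node Create section later in the book walks through this pattern in more detail, including relating to existing nodes with the EXISTING attribute.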
The rest of the inputs and output for the node creation mutation are the same as those described previously for a simpler model without relationships.
Updating a Node
The OrganizationUpdateInput
for making changes to organizations looks similar to the input types used for objects that don't have relationships. It has a MATCH
attribute to select the objects to update, and a SET
attribute to describe the changes to be made. The difference lies in the input types used within the SET attribute, shown in the schema snippet below.
input OrganizationUpdateInput {
MATCH: OrganizationQueryInput
SET: OrganizationUpdateMutationInput
}
input OrganizationUpdateMutationInput {
members: [OrganizationMembersChangeInput!]
name: String
}
The differences for the inclusion of relationships begin in the OrganizationUpdateMutationInput
input type used to set new values for the nodes to be updated, which includes a members
attribute of type OrganizationMembersChangeInput
. There are three changes one could make to a relationship: add one or more new relationships to destination nodes, delete one or more relationships to destination nodes, or keep the relationships to the same set of destination nodes but make changes to the properties of one or more of those destination nodes. Those options are captured in the OrganizationMembersChangeInput
input type in the schema snippet below.
input OrganizationMembersChangeInput {
ADD: OrganizationMembersCreateMutationInput
DELETE: OrganizationMembersDeleteInput
UPDATE: OrganizationMembersUpdateInput
}
The OrganizationMembersCreateMutationInput
input type for the ADD
operation is the same one that was described above as the CREATE
attribute in the section on mutations to create new relationships. This makes sense, as in this context it is already clear what the source node or nodes are, and the ADD
attribute need only create the new relationships to be added. Similarly, the OrganizationMembersDeleteInput
used for the DELETE
attribute here is the same one that is used for the OrganizationMembersDelete
operation under the root GraphQL Mutation
type. The match will be scoped to the relationships under the source node(s) selected by the OrganizationUpdateInput
MATCH
query. As expected, the same is true for the OrganizationMembersUpdateInput
input type used for the UPDATE
attribute. It's the same as the input used for the OrganizationMembersUpdate
mutation under the root GraphQL Mutation
type.
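Putting these pieces together, the hypothetical mutation below uses the UPDATE variant of OrganizationMembersChangeInput to change the join date of one member while updating the organization (names and values are illustrative):

```graphql
mutation {
  OrganizationUpdate(
    input: {
      MATCH: { name: { EQ: "Warpforge" } }
      SET: {
        members: {
          UPDATE: {
            MATCH: { dst: { User: { email: { EQ: "alistair@example.com" } } } }
            SET: { joinDate: "2021-06-15" }
          }
        }
      }
    }
  ) {
    id
    name
  }
}
```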
Deleting a Node
The OrganizationDeleteInput
input type, shown in the schema snippet below, looks similar to the one for nodes without relationships. However, the OrganizationDeleteMutationInput
is different, as it includes a members
attribute of type OrganizationMembersDeleteInput
, which is the same type used for the OrganizationMembersDelete
mutation under the GraphQL root Mutation
type. In the case of this model, this additional input does little. In a more complex model with multiple types of relationships, however, it would allow for deletion of whole subgraphs of nodes and relationships.
input OrganizationDeleteInput {
DELETE: OrganizationDeleteMutationInput
MATCH: OrganizationQueryInput
}
input OrganizationDeleteMutationInput {
members: [OrganizationMembersDeleteInput!]
}
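As a sketch of how the nested input might be used, the hypothetical mutation below deletes an organization while scoping the members attribute to relationships with a 2020 join date; the exact cascade behavior depends on the model, and the names and values here are illustrative:

```graphql
mutation {
  OrganizationDelete(
    input: {
      MATCH: { name: { EQ: "Warpforge" } }
      DELETE: { members: [{ MATCH: { joinDate: { CONTAINS: "2020" } } }] }
    }
  )
}
```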
Full Schema Listing
The full schema for the example above is included below.
input OrganizationMembersDeleteInput {
MATCH: OrganizationMembersQueryInput
dst: OrganizationMembersDstDeleteMutationInput
src: OrganizationMembersSrcDeleteMutationInput
}
input OrganizationCreateMutationInput {
id: ID
members: [OrganizationMembersCreateMutationInput!]
name: String
}
input OrganizationMembersCreateInput {
CREATE: [OrganizationMembersCreateMutationInput!]
MATCH: OrganizationQueryInput
}
input OrganizationMembersSrcQueryInput {
Organization: OrganizationQueryInput
}
type Mutation {
OrganizationCreate(
input: OrganizationCreateMutationInput!
options: OrganizationOptions
): Organization
OrganizationDelete(input: OrganizationDeleteInput!, options: OrganizationOptions): Int
OrganizationMembersCreate(
input: OrganizationMembersCreateInput!
options: OrganizationMembersOptions
): [OrganizationMembersRel!]
OrganizationMembersDelete(
input: OrganizationMembersDeleteInput!
options: OrganizationMembersOptions
): Int
OrganizationMembersUpdate(
input: OrganizationMembersUpdateInput!
options: OrganizationMembersOptions
): [OrganizationMembersRel!]
OrganizationUpdate(
input: OrganizationUpdateInput!
options: OrganizationOptions
): [Organization!]
UserCreate(input: UserCreateMutationInput!, options: UserOptions): User
UserDelete(input: UserDeleteInput!, options: UserOptions): Int
UserUpdate(input: UserUpdateInput!, options: UserOptions): [User!]
}
input OrganizationMembersChangeInput {
ADD: OrganizationMembersCreateMutationInput
DELETE: OrganizationMembersDeleteInput
UPDATE: OrganizationMembersUpdateInput
}
input UserUpdateMutationInput {
email: String
}
input UserDeleteInput {
DELETE: UserDeleteMutationInput
MATCH: UserQueryInput
}
input OrganizationMembersNodesMutationInputUnion {
User: UserInput
}
input UserInput {
EXISTING: UserQueryInput
NEW: UserCreateMutationInput
}
input OrganizationQueryInput {
id: StringQueryInput
members: OrganizationMembersQueryInput
name: StringQueryInput
}
union OrganizationMembersNodesUnion = User
type Query {
Organization(
input: OrganizationQueryInput
options: OrganizationOptions
): [Organization!]
OrganizationMembers(
input: OrganizationMembersQueryInput
options: OrganizationOptions
): [OrganizationMembersRel!]
User(input: UserQueryInput, options: UserOptions): [User!]
_version: String
}
input OrganizationMembersDstDeleteMutationInput {
User: UserDeleteMutationInput
}
input OrganizationMembersUpdateInput {
MATCH: OrganizationMembersQueryInput
SET: OrganizationMembersUpdateMutationInput!
}
input OrganizationMembersSrcUpdateMutationInput {
Organization: OrganizationUpdateMutationInput
}
input UserUpdateInput {
MATCH: UserQueryInput
SET: UserUpdateMutationInput
}
input OrganizationUpdateInput {
MATCH: OrganizationQueryInput
SET: OrganizationUpdateMutationInput
}
type OrganizationMembersRel {
dst: OrganizationMembersNodesUnion!
id: ID!
joinDate: String
src: Organization!
}
input OrganizationMembersUpdateMutationInput {
dst: OrganizationMembersDstUpdateMutationInput
joinDate: String
src: OrganizationMembersSrcUpdateMutationInput
}
input OrganizationMembersSrcDeleteMutationInput {
Organization: OrganizationDeleteMutationInput
}
input UserQueryInput {
email: StringQueryInput
id: StringQueryInput
}
input OrganizationMembersQueryInput {
dst: OrganizationMembersDstQueryInput
id: StringQueryInput
joinDate: StringQueryInput
src: OrganizationMembersSrcQueryInput
}
input OrganizationDeleteMutationInput {
members: [OrganizationMembersDeleteInput!]
}
type Organization {
id: ID!
members(input: OrganizationMembersQueryInput): [OrganizationMembersRel!]
name: String
}
input OrganizationUpdateMutationInput {
members: [OrganizationMembersChangeInput!]
name: String
}
type Subscription
input OrganizationMembersCreateMutationInput {
dst: OrganizationMembersNodesMutationInputUnion!
id: ID
joinDate: String
}
input UserDeleteMutationInput
input OrganizationMembersDstUpdateMutationInput {
User: UserUpdateMutationInput
}
type User {
email: String
id: ID!
}
input OrganizationMembersDstQueryInput {
User: UserQueryInput
}
input UserCreateMutationInput {
email: String
id: ID
}
input StringQueryInput {
CONTAINS: String
EQ: String
GT: String
GTE: String
IN: [String!]
LT: String
LTE: String
NOTCONTAINS: String
NOTEQ: String
NOTIN: [String!]
}
input OrganizationDeleteInput {
DELETE: OrganizationDeleteMutationInput
MATCH: OrganizationQueryInput
}
Warpgrapher CRUD API
One of the primary features of Warpgrapher is the auto-generation of CRUD operations for all types. This includes basic and advanced queries that support nested operations and graph traversals. The schema itself was described in the preceding sections. This chapter provides a set of usage examples for the various queries and mutations, for both nodes and relationships.
For more details on general GraphQL syntax, see: https://graphql.org/learn/.
Node Create
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
Node with No Relationships
The GraphQL query below creates a new organization.
mutation {
OrganizationCreate(input: { name: "Warpforge" }) {
id
name
}
}
The output is as follows.
{
"data": {
"OrganizationCreate": {
"id": "edff7816-f40c-4be1-904a-b7ab62e60be1",
"name": "Warpforge"
}
}
}
Node Related to a New Node
The GraphQL query below creates a new organization with a relationship to a member who is a new user.
mutation {
OrganizationCreate(
input: {
name: "Just Us League"
members: {
joinDate: "2020-02-20",
dst: { User: { NEW: { email: "alistair@example.com" } } }
}
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationCreate": {
"id": "a33ab37b-af51-4ccd-88ee-7d4d6eb75de9",
"name": "Just Us League",
"members": [
{
"id": "295d191f-0d66-484c-b1eb-39494f0ae8a0",
"joinDate": "2020-02-20",
"dst": {
"id": "5ca84494-dd14-468e-812f-cb2da07157db",
"email": "alistair@example.com"
}
}
]
}
}
}
Node Related to an Existing Node
The GraphQL query below creates a new organization with a new relationship to an existing member, alistair@example.com, the same user created in the example above.
mutation {
OrganizationCreate(
input: {
name: "Consortia Unlimited"
members: {
joinDate: "2020-02-20",
dst: { User: { EXISTING: { email: "alistair@example.com" } } }
}
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows:
{
"data": {
"OrganizationCreate": {
"id": "9ecef884-2afc-457e-8486-e1f84c761050",
"name": "Consortia Unlimited",
"members": [
{
"id": "008fdc43-f3cf-48eb-a9e9-c5c753c65ee9",
"joinDate": "2020-02-20",
"dst": {
"id": "5ca84494-dd14-468e-812f-cb2da07157db",
"email": "alistair@example.com"
}
}
]
}
}
}
Note that the id of the destination user in this example is the same as in the previous example, because the relationship was created to the same existing user.
Node Read
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
- All Nodes
- Node with Matching Properties
- Node with Matching Relationships
- Node with Matching Destinations
All Nodes
The GraphQL query below lists all organizations.
query {
Organization {
id
name
}
}
The output is as follows.
{
"data": {
"Organization": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Just Us League"
},
{
"id": "5692bd2a-2bc9-4497-8285-1f7860478cd6",
"name": "Consortia Unlimited"
},
{
"id": "1eea1d47-1fe8-4bed-9116-e0037fbdb296",
"name": "Warpforge"
}
]
}
}
Node with Matching Properties
The GraphQL query below lists all organizations with the name Warpforge
.
query {
Organization(input: { name: { EQ: "Warpforge" } }) {
id
name
}
}
The output is as follows.
{
"data": {
"Organization": [
{
"id": "1eea1d47-1fe8-4bed-9116-e0037fbdb296",
"name": "Warpforge"
}
]
}
}
Node with Matching Relationships
The GraphQL query below lists all organizations with members that joined in 2020.
query {
Organization(
input: { members: { joinDate: { CONTAINS: "2020" } } }
) {
id
name
members {
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows:
{
"data": {
"Organization": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Just Us League",
"members": [
{
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
},
{
"id": "5692bd2a-2bc9-4497-8285-1f7860478cd6",
"name": "Consortia Unlimited",
"members": [
{
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
]
}
}
Node with Matching Destinations
The GraphQL query below lists all the organizations of which the user alistair@example.com
is a member.
query {
Organization(
input: {
members: { dst: { User: { email: { EQ: "alistair@example.com" } } } }
}
) {
id
name
members {
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows:
{
"data": {
"Organization": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Just Us League",
"members": [
{
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
},
{
"id": "5692bd2a-2bc9-4497-8285-1f7860478cd6",
"name": "Consortia Unlimited",
"members": [
{
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
]
}
}
Node Update
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
- Match Node Properties
- Match Destination Properties
- Add a Destination Node
- Update a Destination Node
- Delete a Relationship
Match Node Properties
The GraphQL query below matches a node based on its properties and updates it.
mutation {
OrganizationUpdate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
SET: { name: "Harsh Truth Heavy Industries" }
}
) {
id
name
}
}
The output is as follows.
{
"data": {
"OrganizationUpdate": [
{
"id": "1eea1d47-1fe8-4bed-9116-e0037fbdb296",
"name": "Harsh Truth Heavy Industries"
}
]
}
}
Match Destination Properties
The GraphQL query below matches a node based on properties of a destination node to which it is related, then updates it.
mutation {
OrganizationUpdate(
input: {
MATCH: {
members: { dst: { User: { email: { EQ: "balthazar@example.com" } } } }
}
SET: { name: "Prophet and Loss Inc." }
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationUpdate": [
{
"id": "5692bd2a-2bc9-4497-8285-1f7860478cd6",
"name": "Prophet and Loss Inc.",
"members": [
{
"id": "78acc7ac-2153-413d-a8d7-688e472340d5",
"joinDate": "2021-01-02",
"dst": {
"id": "ea2a1b68-fda2-4adb-9c80-554761a1c97b",
"email": "balthazar@example.com"
}
},
{
"id": "00051bc1-133c-445d-b00c-4faf61b2bffa",
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
]
}
}
Add a Destination Node
The GraphQL query below updates the Warpforge organization to add an additional, newly created user. If an EXISTING
attribute were used in place of NEW
in the query below, one could query for existing users to add to the organization.
mutation {
OrganizationUpdate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
SET: {
members: {
ADD: {
joinDate: "2018-01-08",
dst: { User: { NEW: { email: "constantine@example.com" } } }
}
}
}
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationUpdate": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge",
"members": [
{
"id": "38cd72c8-75b5-4547-9829-38d6a6854eb9",
"joinDate": "2018-01-08",
"dst": {
"id": "f2e894bf-e98e-48a7-b16a-adc95cd34ac3",
"email": "constantine@example.com"
}
},
{
"id": "bd302b7f-8a3f-49ab-aac3-c3348d8b8d94",
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
]
}
}
Update a Destination Node
The GraphQL query below updates a value on a destination node.
mutation {
OrganizationUpdate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
SET: {
members: {
UPDATE: {
MATCH: {
dst: { User: { email: { EQ: "constantine@example.com" } } }
}
SET: { dst: { User: { email: "javier@example.com" } } }
}
}
}
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationUpdate": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge",
"members": [
{
"id": "38cd72c8-75b5-4547-9829-38d6a6854eb9",
"joinDate": "2018-01-08",
"dst": {
"id": "f2e894bf-e98e-48a7-b16a-adc95cd34ac3",
"email": "javier@example.com"
}
},
{
"id": "bd302b7f-8a3f-49ab-aac3-c3348d8b8d94",
"joinDate": "2020-02-20",
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
]
}
}
Delete a Relationship
The GraphQL query below deletes the relationship from the Warpforge organization to alistair@example.com, removing them as a member of the organization.
mutation {
OrganizationUpdate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
SET: {
members: {
DELETE: {
MATCH: { dst: { User: { email: { EQ: "alistair@example.com" } } } }
}
}
}
}
) {
id
name
members {
id
joinDate
dst {
... on User {
id
email
}
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationUpdate": [
{
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge",
"members": [
{
"id": "38cd72c8-75b5-4547-9829-38d6a6854eb9",
"joinDate": "2018-01-08",
"dst": {
"id": "f2e894bf-e98e-48a7-b16a-adc95cd34ac3",
"email": "javier@example.com"
}
}
]
}
]
}
}
Node Delete
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
Node with Matching Properties
The GraphQL query below deletes a node based on matching against its properties.
mutation {
OrganizationDelete(
input: { MATCH: { name: { EQ: "Harsh Truth Heavy Industries" } } }
)
}
The output is as follows, indicating that one organization was successfully deleted.
{
"data": {
"OrganizationDelete": 1
}
}
Relationship Create
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
Between Existing Nodes
The GraphQL query below creates a new membership relationship between two existing nodes, adding alistair@example.com to the Warpforge project.
mutation {
OrganizationMembersCreate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
CREATE: {
joinDate: "2022-01-28",
dst: { User: { EXISTING: { email: { EQ: "alistair@example.com" } } } }
}
}
) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembersCreate": [
{
"id": "21173765-b2a3-4bb1-bfa7-5787ef17d6a8",
"joinDate": "2022-01-28",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
}
From an Existing to a New Node
The GraphQL query below creates a new membership relationship from an existing organization to a newly created user.
mutation {
OrganizationMembersCreate(
input: {
MATCH: { name: { EQ: "Warpforge" } }
CREATE: {
joinDate: "2022-01-28",
dst: { User: { NEW: { email: "constantine@example.com" } } }
}
}
) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembersCreate": [
{
"id": "3ab33be6-16a3-4e50-87b5-3bb7d195ea54",
"joinDate": "2022-01-28",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "c2b71308-2fd7-4d43-b037-30ec473e90a5",
"email": "constantine@example.com"
}
}
]
}
}
Relationship Read
The GraphQL API examples below use the example schema described in the Relationships section of the book. The unique IDs for nodes and relationships in the examples below may differ from those in other sections and chapters of the book.
By Relationship Properties
The GraphQL query below retrieves all the members who joined organizations on 2018-01-08.
query {
OrganizationMembers(input: { joinDate: { EQ: "2018-01-08" } }) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembers": [
{
"id": "38cd72c8-75b5-4547-9829-38d6a6854eb9",
"joinDate": "2018-01-08",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "f2e894bf-e98e-48a7-b16a-adc95cd34ac3",
"email": "javier@example.com"
}
}
]
}
}
By Source Node
The GraphQL query below retrieves all of the members relationships originating from the Warpforge organization.
query {
OrganizationMembers(
input: { src: { Organization: { name: { EQ: "Warpforge" } } } }
) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembers": [
{
"id": "3ab33be6-16a3-4e50-87b5-3bb7d195ea54",
"joinDate": "2022-01-28",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "c2b71308-2fd7-4d43-b037-30ec473e90a5",
"email": "constantine@example.com"
}
},
{
"id": "21173765-b2a3-4bb1-bfa7-5787ef17d6a8",
"joinDate": "2022-01-28",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
},
{
"id": "38cd72c8-75b5-4547-9829-38d6a6854eb9",
"joinDate": "2018-01-08",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "f2e894bf-e98e-48a7-b16a-adc95cd34ac3",
"email": "javier@example.com"
}
}
]
}
}
By Destination Node
The GraphQL query below retrieves all of the members relationships whose destination is the user alistair@example.com.
query {
OrganizationMembers(
input: { dst: { User: { email: { EQ: "alistair@example.com" } } } }
) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembers": [
{
"id": "21173765-b2a3-4bb1-bfa7-5787ef17d6a8",
"joinDate": "2022-01-28",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
},
{
"id": "00051bc1-133c-445d-b00c-4faf61b2bffa",
"joinDate": "2020-02-20",
"src": {
"id": "5692bd2a-2bc9-4497-8285-1f7860478cd6",
"name": "Prophet and Loss Inc."
},
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
}
Relationship Update
Update Relationship Properties
The GraphQL query below updates the join date on a membership.
mutation {
OrganizationMembersUpdate(
input: {
MATCH: {
src: { Organization: { name: { EQ: "Warpforge" } } }
dst: { User: { email: { EQ: "alistair@example.com" } } }
}
SET: { joinDate: "2021-12-31" }
}
) {
id
joinDate
src {
id
name
}
dst {
... on User {
id
email
}
}
}
}
The output is as follows.
{
"data": {
"OrganizationMembersUpdate": [
{
"id": "21173765-b2a3-4bb1-bfa7-5787ef17d6a8",
"joinDate": "2021-12-31",
"src": {
"id": "85faa40f-04a8-4f0a-ae44-804604b4ef4c",
"name": "Warpforge"
},
"dst": {
"id": "de5e58cd-eb5e-4bf8-8a7a-9656999f4013",
"email": "alistair@example.com"
}
}
]
}
}
Relationship Delete
Delete a Relationship
The GraphQL query below deletes the membership relationship between the Warpforge organization and constantine@example.com.
mutation {
OrganizationMembersDelete(
input: {
MATCH: {
src: { Organization: { name: { EQ: "Warpforge" } } }
dst: { User: { email: { EQ: "constantine@example.com" } } }
}
}
)
}
The output is as follows.
{
"data": {
"OrganizationMembersDelete": 1
}
}
Engine Features
As shown in the previous chapters, Warpgrapher offers a considerable amount of power and flexibility out of the box. With a configuration describing a data model and a little bit of server integration code, it is possible to stand up a fully functional GraphQL service, with automatically generated CRUD operations for nodes and relationships. The recursive structure of the schema allows for single queries and mutations over arbitrarily deep sub-graphs.
However, these features by themselves are still not enough for a production service. A robust production system will doubtless need to validate inputs. It will likely need to conduct authorization checks on requests as a security control. It will need to include business logic that does calculations over existing data or launches additional workflows. This chapter discusses the extensibility features of the Warpgrapher engine that make it possible to build a real GraphQL API service on top of the foundation that Warpgrapher provides.
Static Endpoints
Warpgrapher includes built-in static endpoints that provide useful information or functionality. Built-in static endpoint names are prefixed with an underscore (_).
Version
If the Engine
is built with an explicit version:
let mut server: Engine<()> = Engine::new(config, db)
    .with_version("0.1.0".to_string())
    .build();
the version value can be accessed via the _version
endpoint:
query {
_version
}
{
"data": {
"_version": "0.1.0"
}
}
If the server is not configured with an explicit version, the _version
endpoint will return null
:
{
"data": {
"_version": null
}
}
Defined Endpoints
In addition to the CRUD endpoints auto-generated for each type, Warpgrapher provides the ability to define additional custom endpoints.
Configuration
The schema for an endpoint entry in the Warpgrapher configuration is as follows.
endpoints:
- name: String
class: String # /Mutation | Query/
input: # null if there is no input parameter
type: String
list: Boolean
required: Boolean
output: # null if there is no output parameter
type: String
list: Boolean # defaults to false
required: Boolean # defaults to false
The name
of the endpoint will be used later as the key to a hash of endpoint resolution functions. It uniquely identifies this endpoint. The class
attribute tells Warpgrapher whether this endpoint belongs under the root query or root mutation object. The convention is that any operation with side effects, modifying the persistent data store, should be a mutation. Read-only operations are queries. The input
attribute allows specification of an input to the endpoint function. The input type may be a scalar GraphQL type -- Boolean
, Float
, ID
, Int
, or String
-- or it may be a type defined elsewhere in the model
section of the Warpgrapher configuration. The list
attribute determines whether the input is a list of that type rather than a single instance. If the required
attribute is true, the input is required. If false
, the input is optional. The output
attribute describes the value returned by the custom endpoint. It has fields similar to input
, in that it includes type
, list
, and required
attributes.
The following configuration defines a custom endpoint, TopIssue
.
version: 1
model:
- name: Issue
props:
- name: name
type: String
- name: points
type: Int
endpoints:
- name: TopIssue
class: Query
input: null
output:
type: Issue
Implementation
To implement the custom endpoint, a resolver function is defined, as follows. In this example, the function just puts together a static response and resolves it. A real system would likely do some comparison of nodes and relationships to determine the top issue, and dynamically return that record.
// endpoint returning an `Issue` node
fn resolve_top_issue(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
    Box::pin(async move {
        let top_issue = facade.node(
            "Issue",
            hashmap! {
                "name".to_string() => Value::from("Learn more rust".to_string()),
                "points".to_string() => Value::from(Into::<i64>::into(5))
            },
        );
        facade.resolve_node(&top_issue).await
    })
}
Add Resolvers to the Warpgrapher Engine
To add the custom endpoint resolver to the engine, it must be associated with the name the endpoint was given in the configuration above. The example code below creates a HashMap
to map from the custom endpoint name and the implementing function. That map is then passed to the Engine
when it is created.
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert("TopIssue".to_string(), Box::new(resolve_top_issue));

// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
    .with_resolvers(resolvers)
    .build()
    .expect("Failed to build engine");
Example of Calling the Endpoint
The code below calls the engine with a query that exercises the custom endpoint.
let query = "
query {
TopIssue {
name
points
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
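Given the static resolver above, the printed result should resemble the following (exact formatting may vary):

```json
{
  "data": {
    "TopIssue": {
      "name": "Learn more rust",
      "points": 5
    }
  }
}
```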
Full Example Source
See below for the full code for the example above.
use maplit::hashmap;
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::resolvers::{ExecutionResult, ResolverFacade, Resolvers};
use warpgrapher::engine::value::Value;
use warpgrapher::juniper::BoxFuture;
use warpgrapher::Engine;
static CONFIG: &str = "
version: 1
model:
- name: Issue
props:
- name: name
type: String
- name: points
type: Int
endpoints:
- name: TopIssue
class: Query
input: null
output:
type: Issue
";
#[derive(Clone, Debug)]
struct AppRequestContext {}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
AppRequestContext {}
}
}
// endpoint returning an `Issue` node
fn resolve_top_issue(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
let top_issue = facade.node(
"Issue",
hashmap! {
"name".to_string() => Value::from("Learn more rust".to_string()),
"points".to_string() => Value::from(Into::<i64>::into(5))
},
);
facade.resolve_node(&top_issue).await
})
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert("TopIssue".to_string(), Box::new(resolve_top_issue));
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
// create new project
let query = "
query {
TopIssue {
name
points
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
// verify result
println!("result: {:#?}", result);
}
Dynamic Props
When Warpgrapher auto-generates a CRUD endpoint, the values of node and relationship properties are retrieved from the database and returned in a query. In some cases, however, it may be necessary to perform real-time computations to derive the value of a property. We call these "dynamic properties", and Warpgrapher provides a mechanism to execute custom logic to resolve their values.
Configuration
In the configuration below, points
is a dynamic property on the Project
type. It has an associated resolver name of resolve_project_points
. That name will be used later to connect the Rust resolver function to this entry in the configuration.
version: 1
model:
- name: Project
props:
- name: name
type: String
- name: points
type: Int
resolver: resolve_project_points
Implementation
The implementation below defines the resolver. In this example, the resolver simply returns a constant value. In a real system, the implementation might retrieve records and do some calculation to total up a number of points associated with a project.
fn resolve_project_points(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
// compute value
let points = 5;
facade.resolve_scalar(points)
})
}
Add Resolvers to the Engine
The code in the snippet below adds the resolver function to a map. The key is the name for the custom resolver that was used in the configuration, above. The map is then passed to the Warpgrapher engine, allowing the engine to find the resolver function when the dynamic property must be resolved.
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
"resolve_project_points".to_string(),
Box::new(resolve_project_points),
);
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
Example API Call
The following GraphQL query uses the dynamic resolver defined above.
// create new project
let query = "
mutation {
ProjectCreate(input: {
name: \"Project1\"
}) {
id
points
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
Full Example Source
See below for the full source code to the example above.
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::resolvers::{ExecutionResult, ResolverFacade, Resolvers};
use warpgrapher::juniper::BoxFuture;
use warpgrapher::Engine;
static CONFIG: &str = "
version: 1
model:
- name: Project
props:
- name: name
type: String
- name: points
type: Int
resolver: resolve_project_points
";
#[derive(Clone, Debug)]
struct AppRequestContext {}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
AppRequestContext {}
}
}
fn resolve_project_points(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
// compute value
let points = 5;
facade.resolve_scalar(points)
})
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
"resolve_project_points".to_string(),
Box::new(resolve_project_points),
);
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
// create new project
let query = "
mutation {
ProjectCreate(input: {
name: \"Project1\"
}) {
id
points
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
// verify result
println!("result: {:#?}", result);
}
Dynamic Relationships
Dynamic relationships are similar to dynamic properties, but they return dynamically calculated relationships to other nodes rather than individual property values.
Configuration
The configuration below includes a dynamic resolver called resolve_project_top_contributor
for the top_contributor
relationship. That resolver name will be used later to associate a Rust function to carry out the dynamic resolution.
static CONFIG: &str = "
version: 1
model:
- name: User
props:
- name: name
type: String
- name: Project
props:
- name: name
type: String
rels:
- name: top_contributor
nodes: [User]
resolver: resolve_project_top_contributor
";
Implementation
The next step is to define the custom resolution function in Rust. In this example, the custom relationship resolver creates a hard-coded node and relationship. In a real system, the function might load records and do some calculation or analytic logic to determine who is the top contributor to a project, and then return that user.
fn resolve_project_top_contributor(
facade: ResolverFacade<AppRequestContext>,
) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
// create dynamic dst node
let mut top_contributor_props = HashMap::<String, Value>::new();
top_contributor_props.insert(
"id".to_string(),
Value::from(Uuid::new_v4().to_hyphenated().to_string()),
);
top_contributor_props.insert("name".to_string(), Value::from("user0".to_string()));
let top_contributor = facade.node("User", top_contributor_props);
// create dynamic rel
let rel_id = "1234567890".to_string();
let top_contributor_rel = facade.create_rel_with_dst_node(
Value::from(rel_id),
"topdev",
HashMap::new(),
top_contributor,
Options::default(),
)?;
facade.resolve_rel(&top_contributor_rel).await
})
}
Add the Resolver to the Engine
The resolver is added to a map associated with the name used in the configuration, above. The map is then passed to the Warpgrapher engine. This allows the engine to find the Rust function implementing the custom resolver when it is needed.
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
"resolve_project_top_contributor".to_string(),
Box::new(resolve_project_top_contributor),
);
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
Example API Call
The following GraphQL query uses the dynamic resolver defined above.
// create new project
let query = "
mutation {
ProjectCreate(input: {
name: \"Project1\"
}) {
id
top_contributor {
dst {
... on User {
id
name
}
}
}
}
}
"
.to_string();
Note that the Warpgrapher engine does not create a top-level relationship query for relationships that have custom resolvers. For example, there is no ProjectTopContributor
root-level relationship query. This is because the standard Warpgrapher resolver generated for a relationship query would not know how to handle the dynamic relationship.
Full Example Source
See below for the full source code to the example above.
use std::collections::HashMap;
use std::convert::TryFrom;
use uuid::Uuid;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::objects::Options;
use warpgrapher::engine::resolvers::{ExecutionResult, ResolverFacade, Resolvers};
use warpgrapher::engine::value::Value;
use warpgrapher::juniper::BoxFuture;
use warpgrapher::Engine;
static CONFIG: &str = "
version: 1
model:
- name: User
props:
- name: name
type: String
- name: Project
props:
- name: name
type: String
rels:
- name: top_contributor
nodes: [User]
resolver: resolve_project_top_contributor
";
#[derive(Clone, Debug)]
struct AppRequestContext {}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
AppRequestContext {}
}
}
fn resolve_project_top_contributor(
facade: ResolverFacade<AppRequestContext>,
) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
// create dynamic dst node
let mut top_contributor_props = HashMap::<String, Value>::new();
top_contributor_props.insert(
"id".to_string(),
Value::from(Uuid::new_v4().to_hyphenated().to_string()),
);
top_contributor_props.insert("name".to_string(), Value::from("user0".to_string()));
let top_contributor = facade.node("User", top_contributor_props);
// create dynamic rel
let rel_id = "1234567890".to_string();
let top_contributor_rel = facade.create_rel_with_dst_node(
Value::from(rel_id),
"topdev",
HashMap::new(),
top_contributor,
Options::default(),
)?;
facade.resolve_rel(&top_contributor_rel).await
})
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert(
"resolve_project_top_contributor".to_string(),
Box::new(resolve_project_top_contributor),
);
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
// create new project
let query = "
mutation {
ProjectCreate(input: {
name: \"Project1\"
}) {
id
top_contributor {
dst {
... on User {
id
name
}
}
}
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
// verify result
println!("result: {:#?}", result);
}
Request Context
In some cases, it's desirable to pass custom state information from your application into the Warpgrapher request cycle, so that your custom resolvers can make use of that information. The request context makes this passing of state possible.
Define the RequestContext
Every system using Warpgrapher defines a struct that implements RequestContext
. In addition to implementing the trait, that struct is free to carry additional state information. However, the context must implement Clone
, Debug
, Sync
, Send
, as well as Warpgrapher's RequestContext
trait. See the code snippet below for an example.
#[derive(Clone, Debug)]
struct AppRequestContext {
request_id: String,
}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
// a real system would generate a random request id; hard-coded for this example
let request_id = "1234".to_string();
AppRequestContext { request_id }
}
}
Engine Type Parameter
The struct that implements RequestContext
is passed to the Engine
as a type parameter, as shown in the code snippet below.
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
Access the Context
Once passed to the Engine
, the struct implementing RequestContext
is available to functions that implement custom endpoints and resolvers, as shown in the snippet below.
fn resolve_echo_request(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
let request_context = facade.request_context().unwrap();
let request_id = request_context.request_id.clone();
facade.resolve_scalar(format!("echo! (request_id: {})", request_id))
})
}
Full Example Source
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::resolvers::{ExecutionResult, ResolverFacade, Resolvers};
use warpgrapher::juniper::BoxFuture;
use warpgrapher::Engine;
static CONFIG: &str = "
version: 1
model:
- name: User
props:
- name: email
type: String
endpoints:
- name: EchoRequest
class: Query
input: null
output:
type: String
";
#[derive(Clone, Debug)]
struct AppRequestContext {
request_id: String,
}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
// a real system would generate a random request id; hard-coded for this example
let request_id = "1234".to_string();
AppRequestContext { request_id }
}
}
fn resolve_echo_request(facade: ResolverFacade<AppRequestContext>) -> BoxFuture<ExecutionResult> {
Box::pin(async move {
let request_context = facade.request_context().unwrap();
let request_id = request_context.request_id.clone();
facade.resolve_scalar(format!("echo! (request_id: {})", request_id))
})
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
// define resolvers
let mut resolvers = Resolvers::<AppRequestContext>::new();
resolvers.insert("EchoRequest".to_string(), Box::new(resolve_echo_request));
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_resolvers(resolvers)
.build()
.expect("Failed to build engine");
// execute query on the `EchoRequest` endpoint
let query = "
query {
EchoRequest
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
// verify result
println!("result: {:#?}", result);
assert_eq!(
"echo! (request_id: 1234)",
result
.get("data")
.unwrap()
.get("EchoRequest")
.unwrap()
.as_str()
.unwrap(),
);
}
Input Validation
In many cases, it's necessary to ensure that inputs are valid. What constitutes a valid input is up to the application, but it may mean that values have to be less than a certain length, within a certain range, and/or include or exclude certain characters. Warpgrapher makes it possible to write custom validation functions to reject invalid inputs.
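Before looking at Warpgrapher's validator API, the general shape of such a check is just a function from an input to a Result. The sketch below is a plain, framework-independent illustration; the function name `check_username` and the specific limits are invented for this example, and the Warpgrapher-specific validator appears later in this section.

```rust
/// Framework-independent sketch of an input validator. Returns Ok(()) when
/// the value passes, or a descriptive error when it does not. The length
/// limit and allowed character set are arbitrary examples.
fn check_username(name: &str) -> Result<(), String> {
    if name.is_empty() || name.len() > 32 {
        return Err(format!("name must be 1-32 characters, got {}", name.len()));
    }
    if !name.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
        return Err("name may contain only ASCII letters, digits, and '_'".to_string());
    }
    Ok(())
}

fn main() {
    assert!(check_username("obi_wan").is_ok());
    assert!(check_username("").is_err());
    assert!(check_username("bad name!").is_err());
    println!("all checks passed");
}
```

A Warpgrapher validator follows the same pattern, except that it receives a Warpgrapher Value and returns the library's Error type, as shown in the implementation below.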
Configuration
In the configuration snippet below, the name
property has a validator
field with the name NameValidator
. The NameValidator
string will be used later to connect the Rust function with this definition in the schema.
version: 1
model:
- name: User
props:
- name: name
type: String
required: true
validator: NameValidator
Implementation
The implementation below defines the input validation function itself. The function is relatively simple, rejecting the input if the name is "KENOBI". All other names are accepted.
fn name_validator(value: &Value) -> Result<(), Error> {
if let Value::Map(m) = value {
if let Some(Value::String(name)) = m.get("name") {
if name == "KENOBI" {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed. Cannot be named KENOBI",
field_name = "name"
),
})
} else {
Ok(())
}
} else {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed.",
field_name = "name"
),
})
}
} else {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed.",
field_name = "name"
),
})
}
}
Add Validators to the Engine
The validators, such as the one defined above, are packaged into a map from the name(s) used in the configuration to the Rust functions. The map is then provided to the Warpgrapher Engine
as the engine is built.
// load validators
let mut validators: Validators = Validators::new();
validators.insert("NameValidator".to_string(), Box::new(name_validator));
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_validators(validators.clone())
.build()
.expect("Failed to build engine");
Example API Call
The following API call invokes the validator defined above.
let query = "
mutation {
UserCreate(input: {
name: \"KENOBI\"
}) {
id
name
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
Full Example Source
See below for the full source code to the example above.
use std::collections::HashMap;
use std::convert::TryFrom;
use warpgrapher::engine::config::Configuration;
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::validators::Validators;
use warpgrapher::engine::value::Value;
use warpgrapher::{Engine, Error};
static CONFIG: &str = "
version: 1
model:
- name: User
props:
- name: name
type: String
required: true
validator: NameValidator
";
#[derive(Clone, Debug)]
struct AppRequestContext {}
impl RequestContext for AppRequestContext {
type DBEndpointType = CypherEndpoint;
fn new() -> AppRequestContext {
AppRequestContext {}
}
}
fn name_validator(value: &Value) -> Result<(), Error> {
if let Value::Map(m) = value {
if let Some(Value::String(name)) = m.get("name") {
if name == "KENOBI" {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed. Cannot be named KENOBI",
field_name = "name"
),
})
} else {
Ok(())
}
} else {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed.",
field_name = "name"
),
})
}
} else {
Err(Error::ValidationFailed {
message: format!(
"Input validator for {field_name} failed.",
field_name = "name"
),
})
}
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
// load validators
let mut validators: Validators = Validators::new();
validators.insert("NameValidator".to_string(), Box::new(name_validator));
// create warpgrapher engine
let engine: Engine<AppRequestContext> = Engine::new(config, db)
.with_validators(validators.clone())
.build()
.expect("Failed to build engine");
let query = "
mutation {
UserCreate(input: {
name: \"KENOBI\"
}) {
id
name
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
println!("result: {:#?}", result);
}
Event Handlers
The earlier sections of the book covered a great many options for customizing the behavior of Warpgrapher, including input validation, request context, custom endpoints, and dynamic properties and relationships. Warpgrapher offers an additional API, the event handling API, to modify Warpgrapher's behavior at almost every point in the lifecycle of a request. Event handlers may be added before Engine
creation, before or after request handling, and before or after nodes or relationships are created, read, updated, or deleted. This section will introduce the event handling API using an extended example of implementing a very simple authorization model. Each data record will be owned by one user, and only that user is entitled to read or modify that record.
Configuration
Unlike some of the other customization points in the Warpgrapher engine, no special configuration is required for event handlers. They are created and added to the Engine
using only Rust code. The data model used for this section's example is as follows.
version: 1
model:
- name: Record
props:
- name: content
type: String
Implementation
The example introduces four event hooks illustrating different lifecycle events. One event handler is set up for before the engine is built. It takes in the configuration and modifies it to insert an additional property allowing the system to track the owner of a given Record
. A second event handler runs before every request, inserting the current username into a request context so that the system can determine who is making a request, and thus whether that current user matches the ownership of the records being affected. The remaining event handlers run after node read events and before node modification events, in order to enforce the access control rules.
Before Engine Build
The following function is run before the engine is built. It takes in a mutable copy of the configuration to be used to set up Warpgrapher Engine
. This allows before engine build event handlers to make any conceivable modification to the configuration. They can add or remove endpoints, types, properties, relationships, dynamic resolvers, validation, or anything else that can be included in a configuration.
/// before_build_engine event hook
/// Adds owner meta fields to all types in the model (though in this example, there's only one,
/// the record type)
fn add_owner_field(config: &mut Configuration) -> Result<(), Error> {
for t in config.model.iter_mut() {
let mut_props: &mut Vec<Property> = t.mut_props();
mut_props.push(Property::new(
"owner".to_string(),
UsesFilter::none(),
"String".to_string(),
false,
false,
None,
None,
));
}
Ok(())
}
In this example, the handler iterates through the configuration, finding every type declared in the data model. To each type, it adds a new owner property that will record the identity of the record's owner. This property will later be used to verify that only the owner can read and modify the data.
Before Request Processing
The following event hook function is run before every request that is processed by the Warpgrapher engine. In a full system implementation, it would likely pull information from the metadata
parameter, such as request headers like a JWT, that might be parsed to pull out user identity information. That data might then be used to look up a user profile in the database. In this case, the example simply hard-codes a username. It does, however, demonstrate the use of an application-specific request context as a means of passing data in for use by other event handlers or by custom resolvers.
/// This event handler executes at the beginning of every request and attempts to insert the
/// current user's profile into the request context.
fn insert_user_profile(
mut rctx: Rctx,
mut _ef: EventFacade<Rctx>,
_metadata: HashMap<String, String>,
) -> BoxFuture<Result<Rctx, Error>> {
Box::pin(async move {
// A real implementation would likely pull a user identity from an authentication token in
// metadata, or use that token to look up a full user profile in a database. In this
// example, the identity is hard-coded.
rctx.username = "user-from-JWT".to_string();
Ok(rctx)
})
}
Before Node Creation
The insert_owner
event hook is run prior to the creation of any new nodes. The Value
passed to the function is the GraphQL input type in the form of a Warpgrapher Value
. In this case, the function modifies the input value to insert an additional property, the owner of the node about to be created, which is set to be the username of the current user.
/// before_create event hook
/// Inserts an owner meta property into every new node containing the id of the creator
fn insert_owner(mut v: Value, ef: EventFacade<'_, Rctx>) -> BoxFuture<Result<Value, Error>> {
Box::pin(async move {
if let CrudOperation::CreateNode(_) = ef.op() {
if let Value::Map(ref mut input) = v {
let user_name = ef
.context()
.request_context()
.expect("Expect context")
.username
.to_string();
input.insert("owner".to_string(), Value::String(user_name));
}
}
Ok(v)
})
}
The modified input value is returned from the event hook, and when Warpgrapher continues executing the node creation operation, the owner property is included in the node creation operation, alongside all the other input properties.
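The transformation the hook performs on the input can be pictured with an ordinary map: the creation input comes in, and the same input goes out with one extra `owner` entry. The snippet below mimics that with a plain `HashMap<String, String>` as a simplified stand-in for Warpgrapher's `Value` map; the key names come from this section's example, but the function itself is a sketch, not Warpgrapher API.

```rust
use std::collections::HashMap;

// Simplified stand-in for the before_create hook's effect: take the
// creation input and add an "owner" entry for the current user.
fn insert_owner_sketch(
    mut input: HashMap<String, String>,
    username: &str,
) -> HashMap<String, String> {
    input.insert("owner".to_string(), username.to_string());
    input
}

fn main() {
    let mut input = HashMap::new();
    input.insert("content".to_string(), "Test Content".to_string());
    let out = insert_owner_sketch(input, "user-from-JWT");
    // the original property survives, and the owner property is added
    assert_eq!(out.get("content").map(String::as_str), Some("Test Content"));
    assert_eq!(out.get("owner").map(String::as_str), Some("user-from-JWT"));
}
```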
After Node Read
The enforce_read_access
event hook, defined below, is set to run after each node read operation. The Rust function is passed a Vec
of the nodes that were read. The event hook iterates through those nodes, pulling out their owner property and comparing it to the username of the currently logged-in user. If the two match, the node belongs to the user and is retained in the results list. If they do not match, the current user is not the owner of the record, and the node is discarded from the results list without ever being passed back to the user.
/// after_read event hook
/// Filters the read nodes to those that are authorized to be read
fn enforce_read_access(
mut nodes: Vec<Node<Rctx>>,
ef: EventFacade<'_, Rctx>,
) -> BoxFuture<Result<Vec<Node<Rctx>>, Error>> {
Box::pin(async move {
nodes.retain(|node| {
let node_owner: String = node
.fields()
.get("owner")
.unwrap()
.clone()
.try_into()
.expect("Expect to find owner field.");
node_owner
== ef
.context()
.request_context()
.expect("Context expected")
.username
});
Ok(nodes)
})
}
Before Node Update and Delete
The enforce_write_access
event hook, shown below, is set to run before each node update or delete operation. The Rust function is passed the input
value that corresponds to the GraphQL schema input
argument type for the update or delete operation. In this example implementation, the function executes the MATCH
portion of the update or delete query, reading all the nodes that are intended to be modified. For each node read, the event handler tests whether the owner attribute matches the username of the currently logged-in user. If it does, the node belongs to the current user and is kept in the result set. If it does not, the node is discarded.
Once the node list is filtered, the event handler constructs a new MATCH
query that will match the unique identifiers of all the nodes remaining in the filtered list. This new MATCH
query is returned from the event handler and used subsequently in Warpgrapher's automatically generated resolvers to do the update or deletion operation.
/// before_update event hook
/// Filters out nodes that the user is not authorized to modify
fn enforce_write_access(
v: Value,
mut ef: EventFacade<'_, Rctx>,
) -> BoxFuture<Result<Value, Error>> {
Box::pin(async move {
if let Value::Map(mut m) = v.clone() {
if let Some(input_match) = m.remove("MATCH") {
let nodes = &ef
.read_nodes("Record", input_match, Options::default())
.await?;
// filter nodes that are authorized
let filtered_node_ids: Vec<Value> = nodes
.iter()
.filter(|n| {
let node_owner: String =
n.fields().get("owner").unwrap().clone().try_into().unwrap();
node_owner
== ef
.context()
.request_context()
.expect("Expect context.")
.username
})
.map(|n| Ok(n.id()?.clone()))
.collect::<Result<Vec<Value>, Error>>()?;
// replace MATCH input with filtered nodes
m.insert(
"MATCH".to_string(),
Value::Map(hashmap! {
"id".to_string() => Value::Map(hashmap! {
"IN".to_string() => Value::Array(filtered_node_ids)
})
}),
);
// return modified input
Ok(Value::Map(m))
} else {
// Return original input unmodified
Ok(v)
}
} else {
// Return original input unmodified
Ok(v)
}
})
}
Although not necessary for this use case, the event handler could just as easily have modified the SET
portion of the update query as the MATCH
, in some way adjusting the values used to update an existing node.
Add Handlers to the Engine
The event handlers are all added to an EventHandlerBag
which is then passed to the Warpgrapher engine. The registration function determines where in the life cycle the hook will be called, and in some cases, such as before and after node and relationship CRUD operation handlers, there are arguments to specify which nodes or relationships should be affected.
let mut ehb = EventHandlerBag::new();
ehb.register_before_request(insert_user_profile);
ehb.register_before_engine_build(add_owner_field);
ehb.register_before_node_create(vec!["Record".to_string()], insert_owner);
ehb.register_after_node_read(vec!["Record".to_string()], enforce_read_access);
ehb.register_before_node_update(vec!["Record".to_string()], enforce_write_access);
ehb.register_before_node_delete(vec!["Record".to_string()], enforce_write_access);
// create warpgrapher engine
let engine: Engine<Rctx> = Engine::new(config, db)
.with_event_handlers(ehb)
.build()
.expect("Failed to build engine");
Example API Call
The following GraphQL query triggers at least the first several event handlers in the call. Other queries and mutations would be needed to exercise all of them.
let query = "
mutation {
RecordCreate(input: {
content: \"Test Content\"
}) {
id
content
}
}
"
.to_string();
Full Example Source
See below for the full source code to the example above.
use maplit::hashmap;
use std::collections::HashMap;
use std::convert::TryFrom;
use std::convert::TryInto;
use warpgrapher::engine::config::{Configuration, Property, UsesFilter};
use warpgrapher::engine::context::RequestContext;
use warpgrapher::engine::database::cypher::CypherEndpoint;
use warpgrapher::engine::database::CrudOperation;
use warpgrapher::engine::database::DatabaseEndpoint;
use warpgrapher::engine::events::{EventFacade, EventHandlerBag};
use warpgrapher::engine::objects::{Node, Options};
use warpgrapher::engine::value::Value;
use warpgrapher::juniper::BoxFuture;
use warpgrapher::{Engine, Error};
static CONFIG: &str = "
version: 1
model:
- name: Record
props:
- name: content
type: String
";
#[derive(Clone, Debug)]
pub struct Rctx {
pub username: String,
}
impl Rctx {}
impl RequestContext for Rctx {
type DBEndpointType = CypherEndpoint;
fn new() -> Self {
Rctx {
username: String::new(),
}
}
}
/// This event handler executes at the beginning of every request and attempts to insert the
/// current user's profile into the request context.
fn insert_user_profile(
mut rctx: Rctx,
mut _ef: EventFacade<Rctx>,
_metadata: HashMap<String, String>,
) -> BoxFuture<Result<Rctx, Error>> {
Box::pin(async move {
// A real implementation would likely pull a user identity from an authentication token in
// metadata, or use that token to look up a full user profile in a database. In this
// example, the identity is hard-coded.
rctx.username = "user-from-JWT".to_string();
Ok(rctx)
})
}
/// before_build_engine event hook
/// Adds owner meta fields to all types in the model (though in this example, there's only one,
/// the record type)
fn add_owner_field(config: &mut Configuration) -> Result<(), Error> {
for t in config.model.iter_mut() {
let mut_props: &mut Vec<Property> = t.mut_props();
mut_props.push(Property::new(
"owner".to_string(),
UsesFilter::none(),
"String".to_string(),
false,
false,
None,
None,
));
}
Ok(())
}
/// before_create event hook
/// Inserts an owner meta property into every new node containing the id of the creator
fn insert_owner(mut v: Value, ef: EventFacade<'_, Rctx>) -> BoxFuture<Result<Value, Error>> {
Box::pin(async move {
if let CrudOperation::CreateNode(_) = ef.op() {
if let Value::Map(ref mut input) = v {
let user_name = ef
.context()
.request_context()
.expect("Expect context")
.username
.to_string();
input.insert("owner".to_string(), Value::String(user_name));
}
}
Ok(v)
})
}
/// after_read event hook
/// Filters the read nodes to those that are authorized to be read
fn enforce_read_access(
mut nodes: Vec<Node<Rctx>>,
ef: EventFacade<'_, Rctx>,
) -> BoxFuture<Result<Vec<Node<Rctx>>, Error>> {
Box::pin(async move {
nodes.retain(|node| {
let node_owner: String = node
.fields()
.get("owner")
.unwrap()
.clone()
.try_into()
.expect("Expect to find owner field.");
node_owner
== ef
.context()
.request_context()
.expect("Context expected")
.username
});
Ok(nodes)
})
}
/// before_update event hook
/// Filters out nodes that the user is not authorized to modify
fn enforce_write_access(
v: Value,
mut ef: EventFacade<'_, Rctx>,
) -> BoxFuture<Result<Value, Error>> {
Box::pin(async move {
if let Value::Map(mut m) = v.clone() {
if let Some(input_match) = m.remove("MATCH") {
let nodes = &ef
.read_nodes("Record", input_match, Options::default())
.await?;
// filter nodes that are authorized
let filtered_node_ids: Vec<Value> = nodes
.iter()
.filter(|n| {
let node_owner: String =
n.fields().get("owner").unwrap().clone().try_into().unwrap();
node_owner
== ef
.context()
.request_context()
.expect("Expect context.")
.username
})
.map(|n| Ok(n.id()?.clone()))
.collect::<Result<Vec<Value>, Error>>()?;
// replace MATCH input with filtered nodes
m.insert(
"MATCH".to_string(),
Value::Map(hashmap! {
"id".to_string() => Value::Map(hashmap! {
"IN".to_string() => Value::Array(filtered_node_ids)
})
}),
);
// return modified input
Ok(Value::Map(m))
} else {
// Return original input unmodified
Ok(v)
}
} else {
// Return original input unmodified
Ok(v)
}
})
}
#[tokio::main]
async fn main() {
// parse warpgrapher config
let config = Configuration::try_from(CONFIG.to_string()).expect("Failed to parse CONFIG");
// define database endpoint
let db = CypherEndpoint::from_env()
.expect("Failed to parse cypher endpoint from environment")
.pool()
.await
.expect("Failed to create cypher database pool");
let mut ehb = EventHandlerBag::new();
ehb.register_before_request(insert_user_profile);
ehb.register_before_engine_build(add_owner_field);
ehb.register_before_node_create(vec!["Record".to_string()], insert_owner);
ehb.register_after_node_read(vec!["Record".to_string()], enforce_read_access);
ehb.register_before_node_update(vec!["Record".to_string()], enforce_write_access);
ehb.register_before_node_delete(vec!["Record".to_string()], enforce_write_access);
// create warpgrapher engine
let engine: Engine<Rctx> = Engine::new(config, db)
.with_event_handlers(ehb)
.build()
.expect("Failed to build engine");
let query = "
mutation {
RecordCreate(input: {
content: \"Test Content\"
}) {
id
content
}
}
"
.to_string();
let metadata = HashMap::new();
let result = engine.execute(query, None, metadata).await.unwrap();
println!("result: {:#?}", result);
}