graphdatascience is a Python client for operating and working with the Neo4j Graph Data Science (GDS) library. It enables users to write pure Python code to project graphs, run algorithms, and define and use machine learning pipelines in GDS.
The API is designed to mimic the GDS Cypher procedure API in Python code. It abstracts the necessary operations of the Neo4j Python driver to offer a simpler surface.
graphdatascience is only guaranteed to work with GDS versions 2.0+.
Please leave any feedback as issues on the source repository. Happy coding!
To install the latest deployed version of graphdatascience, simply run:
```bash
pip install graphdatascience
```

What follows is a high-level description of some of the operations supported by graphdatascience. For extensive documentation of all capabilities, please refer to the GDS Python Client Manual.
Extensive end-to-end examples in Jupyter ready-to-run notebooks can be found in the examples source directory:
- Computing similarities with kNN based on FastRP embeddings
- Exporting from GDS and running GCN with PyG
- Load data to a projected graph via graph construction
The library wraps the Neo4j Python driver with a GraphDataScience object through which most calls to GDS will be made.
```python
from graphdatascience import GraphDataScience

# Use Neo4j URI and credentials according to your setup
gds = GraphDataScience("bolt://localhost:7687", auth=None)
```

There's also a method GraphDataScience.from_neo4j_driver for instantiating the gds object directly from a Neo4j driver object.
If we don't want to use the default database of our DBMS, we can specify which one to use:
```python
gds.set_database("my-db")
```

If you are connecting the client to an AuraDS instance, you can have the recommended non-default Python driver configuration settings applied automatically. To do so, set the constructor argument aura_ds=True:
```python
from graphdatascience import GraphDataScience

# Configures the driver with AuraDS-recommended settings
gds = GraphDataScience(
    "neo4j+s://my-aura-ds.databases.neo4j.io:7687",
    auth=("neo4j", "my-password"),
    aura_ds=True,
)
```

Supposing that we have some graph data in our Neo4j database, we can project the graph into memory.
```python
# Optionally we can estimate the memory needed for the operation first
res = gds.graph.project.estimate("*", "*")
assert res["bytesMax"] < 1e12

G, res = gds.graph.project("graph", "*", "*")
assert res["projectMillis"] >= 0
```

The G that is returned here is a Graph object, which on the client side represents the projection on the server side.
The analogous calls gds.graph.project.cypher and gds.graph.project.cypher.estimate for Cypher-based projection are also supported.
We can also construct a GDS graph from client-side pandas DataFrames. To do this we provide the gds.alpha.graph.construct method with node DataFrames (see schema here) and relationship DataFrames (see schema here).
```python
import pandas

nodes = pandas.DataFrame(
    {
        "nodeId": [0, 1, 2, 3],
        "labels": ["A", "B", "C", "A"],
        "prop1": [42, 1337, 8, 0],
        "otherProperty": [0.1, 0.2, 0.3, 0.4],
    }
)

relationships = pandas.DataFrame(
    {
        "sourceNodeId": [0, 1, 2, 3],
        "targetNodeId": [1, 2, 3, 0],
        "relationshipType": ["REL", "REL", "REL", "REL"],
        "weight": [0.0, 0.0, 0.1, 42.0],
    }
)

G = gds.alpha.graph.construct(
    "my-graph",     # Graph name
    nodes,          # One or more DataFrames containing node data
    relationships,  # One or more DataFrames containing relationship data
)
```

If your server uses the GDS Enterprise Edition and you have enabled its Apache Arrow server, the construction will be a lot faster. In that case you must also make sure that you have explicitly specified which database to use via GraphDataScience.set_database.
We can take a projected graph, represented to us by a Graph object named G, and run algorithms on it.
```python
# Optionally we can estimate memory of the operation first (if the algorithm supports it)
res = gds.pageRank.mutate.estimate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["bytesMax"] < 1e12

res = gds.pageRank.mutate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["nodePropertiesWritten"] == G.node_count()
```

These calls take one positional argument and a number of keyword arguments depending on the algorithm. The first (positional) argument is a Graph, and the keyword arguments map directly to the algorithm's configuration map.
The other algorithm execution modes (stats, stream and write) are also supported via analogous calls. The stream mode call returns a pandas DataFrame whose contents depend on the algorithm. The mutate, stats and write mode calls, however, return a pandas Series with metadata about the algorithm execution.
The methods for doing topological link prediction are a bit different. Just like in the GDS procedure API, they do not take a graph as an argument, but rather two node references as positional arguments. And they simply return the similarity score of the prediction just made as a float, not any kind of pandas data structure.
In this library, graphs projected onto server-side memory are represented by Graph objects. There are convenience methods on the Graph object that let us extract information about our projected graph. Some examples are (where G is a Graph):
```python
# Get the graph's node count
n = G.node_count()

# Get a list of all relationship properties present on
# relationships of the type "myRelType"
rel_props = G.relationship_properties("myRelType")

# Drop the projection represented by G
G.drop()
```

In GDS, you can train machine learning models. When doing this using graphdatascience, you get a model object returned directly in the client. The model object allows for convenient access to details about the model via Python methods. It also offers the ability to directly compute predictions using the appropriate GDS procedure for that model. This includes support for models trained using pipelines (for Link Prediction and Node Classification) as well as GraphSAGE models.
There's native support for Link prediction pipelines, Node classification pipelines, and Node regression pipelines. Apart from the call to create a pipeline, the GDS native pipeline calls are represented by methods on pipeline Python objects. In addition to the standard GDS calls, there are several methods to query the pipeline for information about it.
Below is a minimal example for node classification (supposing we have a graph G with a property "myClass"):
```python
pipe, _ = gds.beta.pipeline.nodeClassification.create("myPipe")
assert pipe.type() == "Node classification training pipeline"

pipe.addNodeProperty("degree", mutateProperty="rank")
pipe.selectFeatures("rank")

steps = pipe.feature_properties()
assert len(steps) == 1
assert steps[0]["feature"] == "rank"

pipe.addLogisticRegression(penalty=(0.1, 2))

model, res = pipe.train(G, modelName="myModel", targetProperty="myClass", metrics=["ACCURACY"])
assert model.metrics()["ACCURACY"]["test"] > 0
assert res["trainMillis"] >= 0

res = model.predict_stream(G)
assert len(res) == G.node_count()
```

Link prediction and node regression pipelines work the same way, just with different method names for calls specific to those pipelines. Please see the GDS documentation for more on the pipelines' procedure APIs.
Assuming we have a graph G with node property x, we can do the following:
```python
model, res = gds.beta.graphSage.train(G, modelName="myModel", featureProperties=["x"])
assert len(model.metrics()["epochLosses"]) == model.metrics()["ranEpochs"]
assert res["trainMillis"] >= 0

res = model.predict_stream(G)
assert len(res) == G.node_count()
```

Note that with GraphSAGE we call the train method directly and supply all training configuration there.
All procedures from the GDS Graph catalog are supported with graphdatascience. Some examples are (where G is a Graph):
```python
res = gds.graph.list()
assert len(res) == 1  # Exactly one graph is projected

res = gds.graph.streamNodeProperties(G, "rank")
assert len(res) == G.node_count()
```

Further, there's a call named gds.graph.get (graphdatascience only). It takes a graph name as input and returns a Graph object if a graph projection of that name exists in the user's graph catalog. The idea is to have a way of creating Graph objects for already projected graphs, without having to do a new projection.
All procedures from the GDS Pipeline catalog are supported with graphdatascience. Some examples are (where pipe is a machine learning training pipeline object):
```python
res = gds.beta.pipeline.list()
assert len(res) == 1  # Exactly one pipeline is in the catalog

res = gds.beta.pipeline.drop(pipe)
assert res["pipelineName"] == pipe.name()
```

Further, there's a call named gds.pipeline.get (graphdatascience only). It takes a pipeline name as input and returns a training pipeline object if a pipeline of that name exists in the user's pipeline catalog. The idea is to have a way of creating pipeline objects for already existing pipelines, without having to create them again.
All procedures from the GDS Model catalog are supported with graphdatascience. Some examples are (where model is a machine learning model object):
```python
res = gds.beta.model.list()
assert len(res) == 1  # Exactly one model is loaded

res = gds.beta.model.drop(model)
assert res["modelInfo"]["modelName"] == model.name()
```

Further, there's a call named gds.model.get (graphdatascience only). It takes a model name as input and returns a model object if a model of that name exists in the user's model catalog. The idea is to have a way of creating model objects for already loaded models, without having to create them again.
When calling path finding or topological link prediction algorithms, one has to provide specific nodes as input arguments. When using the GDS procedure API directly to call such algorithms, Cypher MATCH statements are typically used to find valid representations of the input nodes of interest; see e.g. this example in the GDS docs. To simplify this, graphdatascience provides a utility method, gds.find_node_id, to find nodes without using Cypher.
Below is an example of how this can be done (supposing G is a projected Graph with City nodes having name properties):
```python
# gds.find_node_id takes a list of labels and a dictionary of
# property key-value pairs
source_id = gds.find_node_id(["City"], {"name": "New York"})
target_id = gds.find_node_id(["City"], {"name": "Philadelphia"})

res = gds.shortestPath.dijkstra.stream(G, sourceNode=source_id, targetNode=target_id)
assert res["totalCost"][0] == 100
```

The nodes found by gds.find_node_id are those that have all the labels specified and fully match all property key-value pairs given. Note that exactly one node per method call must be matched.
For more advanced filtering we recommend users do matching via Cypher's MATCH.
Operations known to not yet work with graphdatascience:
- Numeric utility functions (will never be supported)
- Cypher on GDS (might be supported in the future)
graphdatascience is licensed under the Apache Software License version 2.0. All content is copyright © Neo4j Sweden AB.
This work has been inspired by the great work done in the following libraries:
- pygds by stellasia
- gds-python by moxious