## Graph

```
Graph()
```

Simple append-only graph data structure. It keeps track of nodes, directed edges, and endpoint nodes.

**Note**

A node without edges counts as both an input and an output node of the graph.

**Attributes**

- **nodes**(`set`) : set of node objects (objects need to be hashable)
- **edges_in**(`dict`) : dictionary that maps each node to a list of nodes with edges coming into that node
- **edges_out**(`dict`) : dictionary that maps each node to a list of nodes with edges coming from that node
- **in_nodes**(`dict`) : key-only dictionary (ordered set) with the input nodes of the graph (nodes without input edges)
- **out_nodes**(`dict`) : key-only dictionary (ordered set) with the output nodes of the graph (nodes without output edges)

**Methods:**

### .add_node

```
.add_node(
node
)
```

### .add_edge

```
.add_edge(
node1, node2
)
```

Adds a new edge to the graph.

Also removes nodes from the input (`in_nodes`) or output (`out_nodes`) sets to reflect the new edge if necessary.

**Args**

- **node1**(`Node`) : starting node
- **node2**(`Node`) : ending node
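The endpoint bookkeeping described above can be sketched in pure Python. This is an illustrative re-implementation of the documented behavior, not the library source:

```python
class Graph:
    """Minimal append-only directed graph sketch."""

    def __init__(self):
        self.nodes = set()
        self.edges_in = {}   # node -> list of nodes with edges into it
        self.edges_out = {}  # node -> list of nodes it has edges to
        self.in_nodes = {}   # key-only dicts used as ordered sets
        self.out_nodes = {}

    def add_node(self, node):
        if node not in self.nodes:
            self.nodes.add(node)
            self.edges_in[node] = []
            self.edges_out[node] = []
            # a node without edges is both an input and an output node
            self.in_nodes[node] = None
            self.out_nodes[node] = None

    def add_edge(self, node1, node2):
        self.add_node(node1)
        self.add_node(node2)
        self.edges_out[node1].append(node2)
        self.edges_in[node2].append(node1)
        # node1 now has an outgoing edge, node2 an incoming edge,
        # so neither is a graph output / input anymore
        self.out_nodes.pop(node1, None)
        self.in_nodes.pop(node2, None)
```

For a chain `a -> b -> c` plus an isolated node `d`, `in_nodes` holds `a` and `d` while `out_nodes` holds `c` and `d`, matching the note about edge-less nodes.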

### .dependency_iter

```
.dependency_iter()
```

Returns a dictionary mapping nodes to dependency priorities, with lower values having higher priority. Keys are ordered by priority (ascending) and then by number of dependencies (ascending).

Notes: Traversing a graph by priority guarantees that when we visit a node, all of its dependencies have already been visited; additionally, ordering by number of dependencies guarantees that we can maintain a minimal result cache while traversing the graph.

**Returns**

- **nodes**(`dict`) : dictionary from nodes to `(priority, number of dependencies)` tuples
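One possible implementation of this ordering is sketched below, assuming priority is the longest distance from any input node (the library's exact definition may differ):

```python
from collections import deque

def dependency_iter(edges_in, edges_out, nodes):
    """Sketch: map nodes to (priority, dependency count), sorted ascending."""
    indegree = {n: len(edges_in[n]) for n in nodes}
    priority = {n: 0 for n in nodes}
    queue = deque(n for n in nodes if indegree[n] == 0)  # input nodes first
    while queue:
        n = queue.popleft()
        for m in edges_out[n]:
            # a node's priority is one past its deepest dependency
            priority[m] = max(priority[m], priority[n] + 1)
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    # order keys by (priority, number of dependencies), both ascending
    ordered = sorted(nodes, key=lambda n: (priority[n], len(edges_in[n])))
    return {n: (priority[n], len(edges_in[n])) for n in ordered}
```

Because Python dictionaries preserve insertion order, iterating over the result visits nodes in a valid dependency order.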

### .as_function

```
.as_function(
ord_inputs = None, ord_outputs = None, name = 'compiled_graph', compile = True
)
```

Compiles the graph into a callable TensorFlow graph.

Converts the current graph into a function composed of a series of `layer.compute(*tensors)` calls, and uses `tf.function` to compile this function into a TensorFlow static graph if `compile` is `True`. The resulting function is a closure with access to the layer objects, so TensorFlow is able to trace the computations for each layer's `compute` call.

Another way to feed inputs to a graph is to use input layers and change their values: if the graph is created without declared inputs but its terminal input nodes are Dynamic Inputs, executing those layers reads their placeholder value, so you can change that value before calling the graph and the output will be correct:

```
input_layer1.value = in0
input_layer2.value = in1
outputs = graph()
```

This adds a bit of overhead, since we have to write to the variables first.

**Dev Note**

- makes use of `dependency_iter` to create the computation calls such that, when `compute` is called on a node, all the inputs it needs as dependencies are already available.
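The compilation strategy can be illustrated with a pure-Python stand-in (no `tf.function` here; `as_function` below is a hypothetical sketch that closes over node objects with a `compute` method, as the text describes):

```python
def as_function(ordered_nodes, edges_in, inputs, outputs):
    """Sketch: build a closure that calls node.compute in dependency order."""
    def compiled(*input_values):
        # seed the cache with the positional input values
        cache = dict(zip(inputs, input_values))
        for node in ordered_nodes:          # dependency order: inputs first
            if node not in cache:
                # every dependency is already in the cache by construction
                deps = [cache[d] for d in edges_in[node]]
                cache[node] = node.compute(*deps)
        return tuple(cache[out] for out in outputs)
    return compiled
```

Because `ordered_nodes` comes from a dependency ordering, each `compute` call only ever reads values that earlier iterations (or the inputs) have already produced.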

**Args**

- **ord_inputs**(`List[Node]`) : list of inputs that determines the order of the resulting function's arguments
- **ord_outputs**(`List[Node]`) : list of outputs used to determine the return order
- **name**(`str`) : function name; must be a valid python function name
- **compile**(`bool`) : if True, returns a TensorFlow static graph, else returns a python function

**Returns**

- **function**(`Callable`) : an optimized TensorFlow static graph as a callable function, or a python function if `compile` is False

### .as_function_v2

```
.as_function_v2(
ord_inputs = None, ord_outputs = None, fn_name = 'compiled_graph',
stateful_inputs = False, compile = True
)
```

### .draw

```
.draw(
path
)
```

### .compute

```
.compute(
*input_values
)
```

Computes the graph output values based on the given input values.

**Args**

- **input_values** : input values given in the same order as the graph inputs, or a single dictionary mapping input layers to their values.

**Returns**

a tuple with the values for the corresponding graph outputs
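The two calling conventions can be sketched with a small helper (hypothetical, for illustration only; not part of the library API):

```python
def resolve_inputs(in_nodes, *input_values):
    """Sketch: normalize .compute-style arguments into positional values."""
    # a single dict argument maps input layers to their values
    if len(input_values) == 1 and isinstance(input_values[0], dict):
        return [input_values[0][node] for node in in_nodes]
    # otherwise values are matched positionally to the graph inputs
    return list(input_values)
```

Either `graph.compute(x, y)` or `graph.compute({input1: x, input2: y})` would then resolve to the same ordered value list.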